0s autopkgtest [10:27:19]: starting date and time: 2024-06-16 10:27:19+0000
0s autopkgtest [10:27:19]: git checkout: 433ed4c Merge branch 'skia/nova_flock' into 'ubuntu/5.34+prod'
0s autopkgtest [10:27:19]: host juju-7f2275-prod-proposed-migration-environment-9; command line: /home/ubuntu/autopkgtest/runner/autopkgtest --output-dir /tmp/autopkgtest-work.e7ks52md/out --timeout-copy=6000 --setup-commands 'ln -s /dev/null /etc/systemd/system/bluetooth.service; printf "http_proxy=http://squid.internal:3128\nhttps_proxy=http://squid.internal:3128\nno_proxy=127.0.0.1,127.0.1.1,login.ubuntu.com,localhost,localdomain,novalocal,internal,archive.ubuntu.com,ports.ubuntu.com,security.ubuntu.com,ddebs.ubuntu.com,changelogs.ubuntu.com,keyserver.ubuntu.com,launchpadlibrarian.net,launchpadcontent.net,launchpad.net,10.24.0.0/24,keystone.ps5.canonical.com,objectstorage.prodstack5.canonical.com\n" >> /etc/environment' --apt-pocket=proposed=src:traitlets --apt-upgrade jupyter-notebook --timeout-short=300 --timeout-copy=20000 --timeout-build=20000 --env=ADT_TEST_TRIGGERS=traitlets/5.14.3-1 -- lxd -r lxd-armhf-10.145.243.242 lxd-armhf-10.145.243.242:autopkgtest/ubuntu/oracular/armhf
30s autopkgtest [10:27:49]: testbed dpkg architecture: armhf
31s autopkgtest [10:27:50]: testbed apt version: 2.9.5
31s autopkgtest [10:27:50]: @@@@@@@@@@@@@@@@@@@@ test bed setup
39s Get:1 http://ftpmaster.internal/ubuntu oracular-proposed InRelease [110 kB]
39s Get:2 http://ftpmaster.internal/ubuntu oracular-proposed/universe Sources [389 kB]
40s Get:3 http://ftpmaster.internal/ubuntu oracular-proposed/multiverse Sources [2576 B]
40s Get:4 http://ftpmaster.internal/ubuntu oracular-proposed/restricted Sources [7052 B]
40s Get:5 http://ftpmaster.internal/ubuntu oracular-proposed/main Sources [36.1 kB]
40s Get:6 http://ftpmaster.internal/ubuntu oracular-proposed/main armhf Packages [34.8 kB]
40s Get:7 http://ftpmaster.internal/ubuntu oracular-proposed/restricted armhf Packages [1860 B]
40s Get:8 http://ftpmaster.internal/ubuntu oracular-proposed/universe armhf Packages [293 kB]
40s Get:9 http://ftpmaster.internal/ubuntu oracular-proposed/multiverse armhf Packages [2528 B]
40s Fetched 877 kB in 1s (1040 kB/s)
40s Reading package lists...
59s tee: /proc/self/fd/2: Permission denied
81s Hit:1 http://ftpmaster.internal/ubuntu oracular-proposed InRelease
81s Hit:2 http://ftpmaster.internal/ubuntu oracular InRelease
81s Hit:3 http://ftpmaster.internal/ubuntu oracular-updates InRelease
81s Hit:4 http://ftpmaster.internal/ubuntu oracular-security InRelease
82s Reading package lists...
82s Reading package lists...
82s Building dependency tree...
82s Reading state information...
83s Calculating upgrade...
83s The following packages will be upgraded:
83s   libldap-common libldap2
84s 2 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
84s Need to get 203 kB of archives.
84s After this operation, 0 B of additional disk space will be used.
84s Get:1 http://ftpmaster.internal/ubuntu oracular/main armhf libldap-common all 2.6.7+dfsg-1~exp1ubuntu9 [31.5 kB]
84s Get:2 http://ftpmaster.internal/ubuntu oracular/main armhf libldap2 armhf 2.6.7+dfsg-1~exp1ubuntu9 [171 kB]
84s Fetched 203 kB in 0s (479 kB/s)
84s (Reading database ... 58402 files and directories currently installed.)
84s Preparing to unpack .../libldap-common_2.6.7+dfsg-1~exp1ubuntu9_all.deb ...
84s Unpacking libldap-common (2.6.7+dfsg-1~exp1ubuntu9) over (2.6.7+dfsg-1~exp1ubuntu8) ...
85s Preparing to unpack .../libldap2_2.6.7+dfsg-1~exp1ubuntu9_armhf.deb ...
85s Unpacking libldap2:armhf (2.6.7+dfsg-1~exp1ubuntu9) over (2.6.7+dfsg-1~exp1ubuntu8) ...
85s Setting up libldap-common (2.6.7+dfsg-1~exp1ubuntu9) ...
85s Setting up libldap2:armhf (2.6.7+dfsg-1~exp1ubuntu9) ...
85s Processing triggers for man-db (2.12.1-2) ...
85s Processing triggers for libc-bin (2.39-0ubuntu9) ...
86s Reading package lists...
86s Building dependency tree...
86s Reading state information...
87s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
89s autopkgtest [10:28:48]: rebooting testbed after setup commands that affected boot
128s autopkgtest [10:29:27]: testbed running kernel: Linux 6.5.0-35-generic #35~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Tue May 7 11:19:33 UTC 2
154s autopkgtest [10:29:53]: @@@@@@@@@@@@@@@@@@@@ apt-source jupyter-notebook
172s Get:1 http://ftpmaster.internal/ubuntu oracular/universe jupyter-notebook 6.4.12-2.2ubuntu1 (dsc) [3886 B]
172s Get:2 http://ftpmaster.internal/ubuntu oracular/universe jupyter-notebook 6.4.12-2.2ubuntu1 (tar) [8501 kB]
172s Get:3 http://ftpmaster.internal/ubuntu oracular/universe jupyter-notebook 6.4.12-2.2ubuntu1 (diff) [49.6 kB]
172s gpgv: Signature made Thu Feb 15 18:11:52 2024 UTC
172s gpgv: using RSA key D09F8A854F1055BCFC482C4B23566B906047AFC8
172s gpgv: Can't check signature: No public key
172s dpkg-source: warning: cannot verify inline signature for ./jupyter-notebook_6.4.12-2.2ubuntu1.dsc: no acceptable signature found
173s autopkgtest [10:30:12]: testing package jupyter-notebook version 6.4.12-2.2ubuntu1
175s autopkgtest [10:30:14]: build not needed
178s autopkgtest [10:30:17]: test pytest: preparing testbed
187s Reading package lists...
187s Building dependency tree...
187s Reading state information...
188s Starting pkgProblemResolver with broken count: 0
188s Starting 2 pkgProblemResolver with broken count: 0
188s Done
188s The following additional packages will be installed:
188s   fonts-font-awesome fonts-glyphicons-halflings fonts-lato fonts-mathjax gdb
188s   jupyter-core jupyter-notebook libbabeltrace1 libc6-dbg libdebuginfod-common
188s   libdebuginfod1t64 libdw1t64 libjs-backbone libjs-bootstrap
188s   libjs-bootstrap-tour libjs-codemirror libjs-es6-promise libjs-jed
188s   libjs-jquery libjs-jquery-typeahead libjs-jquery-ui libjs-marked
188s   libjs-mathjax libjs-moment libjs-requirejs libjs-requirejs-text
188s   libjs-sphinxdoc libjs-text-encoding libjs-underscore libjs-xterm libnorm1t64
188s   libpgm-5.3-0t64 libpython3.12t64 libsodium23 libsource-highlight-common
188s   libsource-highlight4t64 libzmq5 node-jed python-notebook-doc
188s   python-tinycss2-common python3-argon2 python3-asttokens python3-bleach
188s   python3-bs4 python3-bytecode python3-comm python3-coverage python3-dateutil
188s   python3-debugpy python3-decorator python3-defusedxml python3-entrypoints
188s   python3-executing python3-fastjsonschema python3-html5lib python3-iniconfig
188s   python3-ipykernel python3-ipython python3-ipython-genutils python3-jedi
188s   python3-jupyter-client python3-jupyter-core python3-jupyterlab-pygments
188s   python3-matplotlib-inline python3-mistune python3-nbclient python3-nbconvert
188s   python3-nbformat python3-nest-asyncio python3-notebook python3-packaging
188s   python3-pandocfilters python3-parso python3-pexpect python3-platformdirs
188s   python3-pluggy python3-prometheus-client python3-prompt-toolkit
188s   python3-psutil python3-ptyprocess python3-pure-eval python3-py
188s   python3-pydevd python3-pytest python3-requests-unixsocket python3-send2trash
188s   python3-soupsieve python3-stack-data python3-terminado python3-tinycss2
188s   python3-tornado python3-traitlets python3-typeshed python3-wcwidth
188s   python3-webencodings python3-zmq sphinx-rtd-theme-common
188s Suggested packages:
188s   gdb-doc gdbserver libjs-jquery-lazyload libjs-json libjs-jquery-ui-docs
188s   fonts-mathjax-extras fonts-stix libjs-mathjax-doc python-argon2-doc
188s   python-bleach-doc python-bytecode-doc python-coverage-doc
188s   python-fastjsonschema-doc python3-genshi python3-lxml python-ipython-doc
188s   python3-pip python-nbconvert-doc texlive-fonts-recommended
188s   texlive-plain-generic texlive-xetex python-pexpect-doc subversion pydevd
188s   python-terminado-doc python-tinycss2-doc python3-pycurl python-tornado-doc
188s   python3-twisted
188s Recommended packages:
188s   javascript-common python3-lxml python3-matplotlib pandoc python3-ipywidgets
188s The following NEW packages will be installed:
188s   autopkgtest-satdep fonts-font-awesome fonts-glyphicons-halflings fonts-lato
188s   fonts-mathjax gdb jupyter-core jupyter-notebook libbabeltrace1 libc6-dbg
188s   libdebuginfod-common libdebuginfod1t64 libdw1t64 libjs-backbone
188s   libjs-bootstrap libjs-bootstrap-tour libjs-codemirror libjs-es6-promise
188s   libjs-jed libjs-jquery libjs-jquery-typeahead libjs-jquery-ui libjs-marked
188s   libjs-mathjax libjs-moment libjs-requirejs libjs-requirejs-text
188s   libjs-sphinxdoc libjs-text-encoding libjs-underscore libjs-xterm libnorm1t64
188s   libpgm-5.3-0t64 libpython3.12t64 libsodium23 libsource-highlight-common
188s   libsource-highlight4t64 libzmq5 node-jed python-notebook-doc
188s   python-tinycss2-common python3-argon2 python3-asttokens python3-bleach
188s   python3-bs4 python3-bytecode python3-comm python3-coverage python3-dateutil
188s   python3-debugpy python3-decorator python3-defusedxml python3-entrypoints
188s   python3-executing python3-fastjsonschema python3-html5lib python3-iniconfig
188s   python3-ipykernel python3-ipython python3-ipython-genutils python3-jedi
188s   python3-jupyter-client python3-jupyter-core python3-jupyterlab-pygments
188s   python3-matplotlib-inline python3-mistune python3-nbclient python3-nbconvert
188s   python3-nbformat python3-nest-asyncio python3-notebook python3-packaging
188s   python3-pandocfilters python3-parso python3-pexpect python3-platformdirs
188s   python3-pluggy python3-prometheus-client python3-prompt-toolkit
188s   python3-psutil python3-ptyprocess python3-pure-eval python3-py
188s   python3-pydevd python3-pytest python3-requests-unixsocket python3-send2trash
188s   python3-soupsieve python3-stack-data python3-terminado python3-tinycss2
188s   python3-tornado python3-traitlets python3-typeshed python3-wcwidth
188s   python3-webencodings python3-zmq sphinx-rtd-theme-common
189s 0 upgraded, 98 newly installed, 0 to remove and 0 not upgraded.
189s Need to get 39.3 MB/39.3 MB of archives.
189s After this operation, 172 MB of additional disk space will be used.
189s Get:1 /tmp/autopkgtest.FXI16z/1-autopkgtest-satdep.deb autopkgtest-satdep armhf 0 [748 B]
189s Get:2 http://ftpmaster.internal/ubuntu oracular/main armhf fonts-lato all 2.015-1 [2781 kB]
189s Get:3 http://ftpmaster.internal/ubuntu oracular/main armhf libdebuginfod-common all 0.191-1 [14.6 kB]
189s Get:4 http://ftpmaster.internal/ubuntu oracular/main armhf fonts-font-awesome all 5.0.10+really4.7.0~dfsg-4.1 [516 kB]
189s Get:5 http://ftpmaster.internal/ubuntu oracular/universe armhf fonts-glyphicons-halflings all 1.009~3.4.1+dfsg-3 [118 kB]
189s Get:6 http://ftpmaster.internal/ubuntu oracular/main armhf fonts-mathjax all 2.7.9+dfsg-1 [2208 kB]
189s Get:7 http://ftpmaster.internal/ubuntu oracular/main armhf libdw1t64 armhf 0.191-1 [238 kB]
189s Get:8 http://ftpmaster.internal/ubuntu oracular/main armhf libbabeltrace1 armhf 1.5.11-3build3 [154 kB]
189s Get:9 http://ftpmaster.internal/ubuntu oracular/main armhf libdebuginfod1t64 armhf 0.191-1 [15.8 kB]
189s Get:10 http://ftpmaster.internal/ubuntu oracular/main armhf libpython3.12t64 armhf 3.12.4-1 [2059 kB]
189s Get:11 http://ftpmaster.internal/ubuntu oracular/main armhf libsource-highlight-common all 3.1.9-4.3build1 [64.2 kB]
189s Get:12 http://ftpmaster.internal/ubuntu oracular/main armhf libsource-highlight4t64 armhf 3.1.9-4.3build1 [306 kB]
189s Get:13 http://ftpmaster.internal/ubuntu oracular/main armhf libc6-dbg armhf 2.39-0ubuntu9 [6017 kB]
190s Get:14 http://ftpmaster.internal/ubuntu oracular/main armhf gdb armhf 15.0.50.20240403-0ubuntu1 [3852 kB]
190s Get:15 http://ftpmaster.internal/ubuntu oracular/main armhf python3-platformdirs all 4.2.1-1 [16.3 kB]
190s Get:16 http://ftpmaster.internal/ubuntu oracular-proposed/universe armhf python3-traitlets all 5.14.3-1 [71.3 kB]
190s Get:17 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-jupyter-core all 5.3.2-2 [25.5 kB]
190s Get:18 http://ftpmaster.internal/ubuntu oracular/universe armhf jupyter-core all 5.3.2-2 [4038 B]
190s Get:19 http://ftpmaster.internal/ubuntu oracular/main armhf libjs-underscore all 1.13.4~dfsg+~1.11.4-3 [118 kB]
190s Get:20 http://ftpmaster.internal/ubuntu oracular/universe armhf libjs-backbone all 1.4.1~dfsg+~1.4.15-3 [185 kB]
190s Get:21 http://ftpmaster.internal/ubuntu oracular/universe armhf libjs-bootstrap all 3.4.1+dfsg-3 [129 kB]
190s Get:22 http://ftpmaster.internal/ubuntu oracular/main armhf libjs-jquery all 3.6.1+dfsg+~3.5.14-1 [328 kB]
190s Get:23 http://ftpmaster.internal/ubuntu oracular/universe armhf libjs-bootstrap-tour all 0.12.0+dfsg-5 [21.4 kB]
190s Get:24 http://ftpmaster.internal/ubuntu oracular/universe armhf libjs-codemirror all 5.65.0+~cs5.83.9-3 [755 kB]
190s Get:25 http://ftpmaster.internal/ubuntu oracular/universe armhf libjs-es6-promise all 4.2.8-12 [14.1 kB]
190s Get:26 http://ftpmaster.internal/ubuntu oracular/universe armhf node-jed all 1.1.1-4 [15.2 kB]
190s Get:27 http://ftpmaster.internal/ubuntu oracular/universe armhf libjs-jed all 1.1.1-4 [2584 B]
190s Get:28 http://ftpmaster.internal/ubuntu oracular/universe armhf libjs-jquery-typeahead all 2.11.0+dfsg1-3 [48.9 kB]
190s Get:29 http://ftpmaster.internal/ubuntu oracular/universe armhf libjs-jquery-ui all 1.13.2+dfsg-1 [252 kB]
190s Get:30 http://ftpmaster.internal/ubuntu oracular/universe armhf libjs-marked all 4.2.3+ds+~4.0.7-3 [36.2 kB]
190s Get:31 http://ftpmaster.internal/ubuntu oracular/main armhf libjs-mathjax all 2.7.9+dfsg-1 [5665 kB]
190s Get:32 http://ftpmaster.internal/ubuntu oracular/universe armhf libjs-moment all 2.29.4+ds-1 [147 kB]
190s Get:33 http://ftpmaster.internal/ubuntu oracular/universe armhf libjs-requirejs all 2.3.6+ds+~2.1.37-1 [201 kB]
190s Get:34 http://ftpmaster.internal/ubuntu oracular/universe armhf libjs-requirejs-text all 2.0.12-1.1 [9056 B]
190s Get:35 http://ftpmaster.internal/ubuntu oracular/universe armhf libjs-text-encoding all 0.7.0-5 [140 kB]
190s Get:36 http://ftpmaster.internal/ubuntu oracular/universe armhf libjs-xterm all 5.3.0-2 [476 kB]
190s Get:37 http://ftpmaster.internal/ubuntu oracular/main armhf python3-ptyprocess all 0.7.0-5 [15.1 kB]
190s Get:38 http://ftpmaster.internal/ubuntu oracular/main armhf python3-tornado armhf 6.4.1-1 [298 kB]
190s Get:39 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-terminado all 0.18.1-1 [13.2 kB]
190s Get:40 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-argon2 armhf 21.1.0-2build1 [19.9 kB]
190s Get:41 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-comm all 0.2.1-1 [7016 B]
190s Get:42 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-bytecode all 0.15.1-3 [44.7 kB]
190s Get:43 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-coverage armhf 7.4.4+dfsg1-0ubuntu2 [146 kB]
190s Get:44 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-pydevd armhf 2.10.0+ds-10ubuntu1 [613 kB]
190s Get:45 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-debugpy all 1.8.0+ds-4ubuntu4 [67.6 kB]
190s Get:46 http://ftpmaster.internal/ubuntu oracular/main armhf python3-decorator all 5.1.1-5 [10.1 kB]
190s Get:47 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-parso all 0.8.3-1 [67.2 kB]
190s Get:48 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-typeshed all 0.0~git20231111.6764465-3 [1274 kB]
190s Get:49 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-jedi all 0.19.1+ds1-1 [693 kB]
190s Get:50 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-matplotlib-inline all 0.1.6-2 [8784 B]
190s Get:51 http://ftpmaster.internal/ubuntu oracular/main armhf python3-pexpect all 4.9-2 [48.1 kB]
190s Get:52 http://ftpmaster.internal/ubuntu oracular/main armhf python3-wcwidth all 0.2.5+dfsg1-1.1ubuntu1 [22.5 kB]
190s Get:53 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-prompt-toolkit all 3.0.46-1 [256 kB]
190s Get:54 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-asttokens all 2.4.1-1 [20.9 kB]
190s Get:55 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-executing all 2.0.1-0.1 [23.3 kB]
190s Get:56 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-pure-eval all 0.2.2-2 [11.1 kB]
190s Get:57 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-stack-data all 0.6.3-1 [22.0 kB]
190s Get:58 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-ipython all 8.20.0-1ubuntu1 [561 kB]
190s Get:59 http://ftpmaster.internal/ubuntu oracular/main armhf python3-dateutil all 2.9.0-2 [80.3 kB]
190s Get:60 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-entrypoints all 0.4-2 [7146 B]
190s Get:61 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-nest-asyncio all 1.5.4-1 [6256 B]
190s Get:62 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-py all 1.11.0-2 [72.7 kB]
190s Get:63 http://ftpmaster.internal/ubuntu oracular/universe armhf libnorm1t64 armhf 1.5.9+dfsg-3.1build1 [206 kB]
190s Get:64 http://ftpmaster.internal/ubuntu oracular/universe armhf libpgm-5.3-0t64 armhf 5.3.128~dfsg-2.1build1 [171 kB]
190s Get:65 http://ftpmaster.internal/ubuntu oracular/main armhf libsodium23 armhf 1.0.18-1build3 [139 kB]
190s Get:66 http://ftpmaster.internal/ubuntu oracular/universe armhf libzmq5 armhf 4.3.5-1build2 [262 kB]
190s Get:67 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-zmq armhf 24.0.1-5build1 [275 kB]
190s Get:68 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-jupyter-client all 7.4.9-2ubuntu1 [90.5 kB]
190s Get:69 http://ftpmaster.internal/ubuntu oracular/main armhf python3-packaging all 24.0-1 [41.1 kB]
190s Get:70 http://ftpmaster.internal/ubuntu oracular/main armhf python3-psutil armhf 5.9.8-2build2 [194 kB]
190s Get:71 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-ipykernel all 6.29.3-1ubuntu1 [82.6 kB]
190s Get:72 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-ipython-genutils all 0.2.0-6 [22.0 kB]
190s Get:73 http://ftpmaster.internal/ubuntu oracular/main armhf python3-webencodings all 0.5.1-5 [11.5 kB]
190s Get:74 http://ftpmaster.internal/ubuntu oracular/main armhf python3-html5lib all 1.1-6 [88.8 kB]
190s Get:75 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-bleach all 6.1.0-2 [49.6 kB]
190s Get:76 http://ftpmaster.internal/ubuntu oracular/main armhf python3-soupsieve all 2.5-1 [33.0 kB]
190s Get:77 http://ftpmaster.internal/ubuntu oracular/main armhf python3-bs4 all 4.12.3-1 [109 kB]
190s Get:78 http://ftpmaster.internal/ubuntu oracular/main armhf python3-defusedxml all 0.7.1-2 [42.0 kB]
190s Get:79 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-jupyterlab-pygments all 0.2.2-3 [6054 B]
190s Get:80 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-mistune all 3.0.2-1 [32.8 kB]
190s Get:81 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-fastjsonschema all 2.19.1-1 [19.7 kB]
190s Get:82 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-nbformat all 5.9.1-1 [41.2 kB]
190s Get:83 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-nbclient all 0.8.0-1 [55.6 kB]
190s Get:84 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-pandocfilters all 1.5.1-1 [23.6 kB]
190s Get:85 http://ftpmaster.internal/ubuntu oracular/universe armhf python-tinycss2-common all 1.3.0-1 [34.1 kB]
190s Get:86 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-tinycss2 all 1.3.0-1 [19.6 kB]
190s Get:87 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-nbconvert all 7.16.4-1 [156 kB]
191s Get:88 http://ftpmaster.internal/ubuntu oracular/main armhf python3-prometheus-client all 0.19.0+ds1-1 [41.7 kB]
191s Get:89 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-send2trash all 1.8.2-1 [15.5 kB]
191s Get:90 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-notebook all 6.4.12-2.2ubuntu1 [1566 kB]
191s Get:91 http://ftpmaster.internal/ubuntu oracular/universe armhf jupyter-notebook all 6.4.12-2.2ubuntu1 [10.4 kB]
191s Get:92 http://ftpmaster.internal/ubuntu oracular/main armhf libjs-sphinxdoc all 7.2.6-8 [150 kB]
191s Get:93 http://ftpmaster.internal/ubuntu oracular/main armhf sphinx-rtd-theme-common all 2.0.0+dfsg-1 [1012 kB]
191s Get:94 http://ftpmaster.internal/ubuntu oracular/universe armhf python-notebook-doc all 6.4.12-2.2ubuntu1 [2540 kB]
191s Get:95 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-iniconfig all 1.1.1-2 [6024 B]
191s Get:96 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-pluggy all 1.5.0-1 [21.0 kB]
191s Get:97 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-pytest all 7.4.4-1 [305 kB]
191s Get:98 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-requests-unixsocket all 0.3.0-4 [7274 B]
191s Preconfiguring packages ...
191s Fetched 39.3 MB in 2s (17.6 MB/s)
192s Selecting previously unselected package fonts-lato.
192s (Reading database ... 58402 files and directories currently installed.)
192s Preparing to unpack .../00-fonts-lato_2.015-1_all.deb ...
192s Unpacking fonts-lato (2.015-1) ...
192s Selecting previously unselected package libdebuginfod-common.
192s Preparing to unpack .../01-libdebuginfod-common_0.191-1_all.deb ...
192s Unpacking libdebuginfod-common (0.191-1) ...
192s Selecting previously unselected package fonts-font-awesome.
192s Preparing to unpack .../02-fonts-font-awesome_5.0.10+really4.7.0~dfsg-4.1_all.deb ...
192s Unpacking fonts-font-awesome (5.0.10+really4.7.0~dfsg-4.1) ...
192s Selecting previously unselected package fonts-glyphicons-halflings.
192s Preparing to unpack .../03-fonts-glyphicons-halflings_1.009~3.4.1+dfsg-3_all.deb ...
192s Unpacking fonts-glyphicons-halflings (1.009~3.4.1+dfsg-3) ...
192s Selecting previously unselected package fonts-mathjax.
192s Preparing to unpack .../04-fonts-mathjax_2.7.9+dfsg-1_all.deb ...
192s Unpacking fonts-mathjax (2.7.9+dfsg-1) ...
192s Selecting previously unselected package libdw1t64:armhf.
192s Preparing to unpack .../05-libdw1t64_0.191-1_armhf.deb ...
192s Unpacking libdw1t64:armhf (0.191-1) ...
192s Selecting previously unselected package libbabeltrace1:armhf.
192s Preparing to unpack .../06-libbabeltrace1_1.5.11-3build3_armhf.deb ...
192s Unpacking libbabeltrace1:armhf (1.5.11-3build3) ...
192s Selecting previously unselected package libdebuginfod1t64:armhf.
192s Preparing to unpack .../07-libdebuginfod1t64_0.191-1_armhf.deb ...
192s Unpacking libdebuginfod1t64:armhf (0.191-1) ...
192s Selecting previously unselected package libpython3.12t64:armhf.
192s Preparing to unpack .../08-libpython3.12t64_3.12.4-1_armhf.deb ...
192s Unpacking libpython3.12t64:armhf (3.12.4-1) ... 192s Selecting previously unselected package libsource-highlight-common. 192s Preparing to unpack .../09-libsource-highlight-common_3.1.9-4.3build1_all.deb ... 192s Unpacking libsource-highlight-common (3.1.9-4.3build1) ... 193s Selecting previously unselected package libsource-highlight4t64:armhf. 193s Preparing to unpack .../10-libsource-highlight4t64_3.1.9-4.3build1_armhf.deb ... 193s Unpacking libsource-highlight4t64:armhf (3.1.9-4.3build1) ... 193s Selecting previously unselected package libc6-dbg:armhf. 193s Preparing to unpack .../11-libc6-dbg_2.39-0ubuntu9_armhf.deb ... 193s Unpacking libc6-dbg:armhf (2.39-0ubuntu9) ... 193s Selecting previously unselected package gdb. 193s Preparing to unpack .../12-gdb_15.0.50.20240403-0ubuntu1_armhf.deb ... 193s Unpacking gdb (15.0.50.20240403-0ubuntu1) ... 193s Selecting previously unselected package python3-platformdirs. 193s Preparing to unpack .../13-python3-platformdirs_4.2.1-1_all.deb ... 193s Unpacking python3-platformdirs (4.2.1-1) ... 193s Selecting previously unselected package python3-traitlets. 193s Preparing to unpack .../14-python3-traitlets_5.14.3-1_all.deb ... 193s Unpacking python3-traitlets (5.14.3-1) ... 193s Selecting previously unselected package python3-jupyter-core. 193s Preparing to unpack .../15-python3-jupyter-core_5.3.2-2_all.deb ... 193s Unpacking python3-jupyter-core (5.3.2-2) ... 193s Selecting previously unselected package jupyter-core. 193s Preparing to unpack .../16-jupyter-core_5.3.2-2_all.deb ... 193s Unpacking jupyter-core (5.3.2-2) ... 193s Selecting previously unselected package libjs-underscore. 193s Preparing to unpack .../17-libjs-underscore_1.13.4~dfsg+~1.11.4-3_all.deb ... 193s Unpacking libjs-underscore (1.13.4~dfsg+~1.11.4-3) ... 193s Selecting previously unselected package libjs-backbone. 193s Preparing to unpack .../18-libjs-backbone_1.4.1~dfsg+~1.4.15-3_all.deb ... 193s Unpacking libjs-backbone (1.4.1~dfsg+~1.4.15-3) ... 
193s Selecting previously unselected package libjs-bootstrap. 193s Preparing to unpack .../19-libjs-bootstrap_3.4.1+dfsg-3_all.deb ... 193s Unpacking libjs-bootstrap (3.4.1+dfsg-3) ... 193s Selecting previously unselected package libjs-jquery. 193s Preparing to unpack .../20-libjs-jquery_3.6.1+dfsg+~3.5.14-1_all.deb ... 193s Unpacking libjs-jquery (3.6.1+dfsg+~3.5.14-1) ... 193s Selecting previously unselected package libjs-bootstrap-tour. 193s Preparing to unpack .../21-libjs-bootstrap-tour_0.12.0+dfsg-5_all.deb ... 193s Unpacking libjs-bootstrap-tour (0.12.0+dfsg-5) ... 193s Selecting previously unselected package libjs-codemirror. 193s Preparing to unpack .../22-libjs-codemirror_5.65.0+~cs5.83.9-3_all.deb ... 193s Unpacking libjs-codemirror (5.65.0+~cs5.83.9-3) ... 193s Selecting previously unselected package libjs-es6-promise. 193s Preparing to unpack .../23-libjs-es6-promise_4.2.8-12_all.deb ... 193s Unpacking libjs-es6-promise (4.2.8-12) ... 193s Selecting previously unselected package node-jed. 193s Preparing to unpack .../24-node-jed_1.1.1-4_all.deb ... 193s Unpacking node-jed (1.1.1-4) ... 193s Selecting previously unselected package libjs-jed. 193s Preparing to unpack .../25-libjs-jed_1.1.1-4_all.deb ... 193s Unpacking libjs-jed (1.1.1-4) ... 194s Selecting previously unselected package libjs-jquery-typeahead. 194s Preparing to unpack .../26-libjs-jquery-typeahead_2.11.0+dfsg1-3_all.deb ... 194s Unpacking libjs-jquery-typeahead (2.11.0+dfsg1-3) ... 194s Selecting previously unselected package libjs-jquery-ui. 194s Preparing to unpack .../27-libjs-jquery-ui_1.13.2+dfsg-1_all.deb ... 194s Unpacking libjs-jquery-ui (1.13.2+dfsg-1) ... 194s Selecting previously unselected package libjs-marked. 194s Preparing to unpack .../28-libjs-marked_4.2.3+ds+~4.0.7-3_all.deb ... 194s Unpacking libjs-marked (4.2.3+ds+~4.0.7-3) ... 194s Selecting previously unselected package libjs-mathjax. 194s Preparing to unpack .../29-libjs-mathjax_2.7.9+dfsg-1_all.deb ... 
194s Unpacking libjs-mathjax (2.7.9+dfsg-1) ... 194s Selecting previously unselected package libjs-moment. 194s Preparing to unpack .../30-libjs-moment_2.29.4+ds-1_all.deb ... 194s Unpacking libjs-moment (2.29.4+ds-1) ... 195s Selecting previously unselected package libjs-requirejs. 195s Preparing to unpack .../31-libjs-requirejs_2.3.6+ds+~2.1.37-1_all.deb ... 195s Unpacking libjs-requirejs (2.3.6+ds+~2.1.37-1) ... 195s Selecting previously unselected package libjs-requirejs-text. 195s Preparing to unpack .../32-libjs-requirejs-text_2.0.12-1.1_all.deb ... 195s Unpacking libjs-requirejs-text (2.0.12-1.1) ... 195s Selecting previously unselected package libjs-text-encoding. 195s Preparing to unpack .../33-libjs-text-encoding_0.7.0-5_all.deb ... 195s Unpacking libjs-text-encoding (0.7.0-5) ... 195s Selecting previously unselected package libjs-xterm. 195s Preparing to unpack .../34-libjs-xterm_5.3.0-2_all.deb ... 195s Unpacking libjs-xterm (5.3.0-2) ... 195s Selecting previously unselected package python3-ptyprocess. 195s Preparing to unpack .../35-python3-ptyprocess_0.7.0-5_all.deb ... 195s Unpacking python3-ptyprocess (0.7.0-5) ... 195s Selecting previously unselected package python3-tornado. 195s Preparing to unpack .../36-python3-tornado_6.4.1-1_armhf.deb ... 195s Unpacking python3-tornado (6.4.1-1) ... 195s Selecting previously unselected package python3-terminado. 195s Preparing to unpack .../37-python3-terminado_0.18.1-1_all.deb ... 195s Unpacking python3-terminado (0.18.1-1) ... 195s Selecting previously unselected package python3-argon2. 195s Preparing to unpack .../38-python3-argon2_21.1.0-2build1_armhf.deb ... 195s Unpacking python3-argon2 (21.1.0-2build1) ... 195s Selecting previously unselected package python3-comm. 195s Preparing to unpack .../39-python3-comm_0.2.1-1_all.deb ... 195s Unpacking python3-comm (0.2.1-1) ... 195s Selecting previously unselected package python3-bytecode. 195s Preparing to unpack .../40-python3-bytecode_0.15.1-3_all.deb ... 
195s Unpacking python3-bytecode (0.15.1-3) ... 195s Selecting previously unselected package python3-coverage. 195s Preparing to unpack .../41-python3-coverage_7.4.4+dfsg1-0ubuntu2_armhf.deb ... 195s Unpacking python3-coverage (7.4.4+dfsg1-0ubuntu2) ... 195s Selecting previously unselected package python3-pydevd. 195s Preparing to unpack .../42-python3-pydevd_2.10.0+ds-10ubuntu1_armhf.deb ... 195s Unpacking python3-pydevd (2.10.0+ds-10ubuntu1) ... 195s Selecting previously unselected package python3-debugpy. 195s Preparing to unpack .../43-python3-debugpy_1.8.0+ds-4ubuntu4_all.deb ... 195s Unpacking python3-debugpy (1.8.0+ds-4ubuntu4) ... 195s Selecting previously unselected package python3-decorator. 195s Preparing to unpack .../44-python3-decorator_5.1.1-5_all.deb ... 195s Unpacking python3-decorator (5.1.1-5) ... 195s Selecting previously unselected package python3-parso. 195s Preparing to unpack .../45-python3-parso_0.8.3-1_all.deb ... 195s Unpacking python3-parso (0.8.3-1) ... 195s Selecting previously unselected package python3-typeshed. 195s Preparing to unpack .../46-python3-typeshed_0.0~git20231111.6764465-3_all.deb ... 195s Unpacking python3-typeshed (0.0~git20231111.6764465-3) ... 196s Selecting previously unselected package python3-jedi. 196s Preparing to unpack .../47-python3-jedi_0.19.1+ds1-1_all.deb ... 196s Unpacking python3-jedi (0.19.1+ds1-1) ... 197s Selecting previously unselected package python3-matplotlib-inline. 197s Preparing to unpack .../48-python3-matplotlib-inline_0.1.6-2_all.deb ... 197s Unpacking python3-matplotlib-inline (0.1.6-2) ... 197s Selecting previously unselected package python3-pexpect. 197s Preparing to unpack .../49-python3-pexpect_4.9-2_all.deb ... 197s Unpacking python3-pexpect (4.9-2) ... 197s Selecting previously unselected package python3-wcwidth. 197s Preparing to unpack .../50-python3-wcwidth_0.2.5+dfsg1-1.1ubuntu1_all.deb ... 197s Unpacking python3-wcwidth (0.2.5+dfsg1-1.1ubuntu1) ... 
197s Selecting previously unselected package python3-prompt-toolkit. 197s Preparing to unpack .../51-python3-prompt-toolkit_3.0.46-1_all.deb ... 197s Unpacking python3-prompt-toolkit (3.0.46-1) ... 197s Selecting previously unselected package python3-asttokens. 197s Preparing to unpack .../52-python3-asttokens_2.4.1-1_all.deb ... 197s Unpacking python3-asttokens (2.4.1-1) ... 197s Selecting previously unselected package python3-executing. 197s Preparing to unpack .../53-python3-executing_2.0.1-0.1_all.deb ... 197s Unpacking python3-executing (2.0.1-0.1) ... 197s Selecting previously unselected package python3-pure-eval. 197s Preparing to unpack .../54-python3-pure-eval_0.2.2-2_all.deb ... 197s Unpacking python3-pure-eval (0.2.2-2) ... 197s Selecting previously unselected package python3-stack-data. 197s Preparing to unpack .../55-python3-stack-data_0.6.3-1_all.deb ... 197s Unpacking python3-stack-data (0.6.3-1) ... 197s Selecting previously unselected package python3-ipython. 197s Preparing to unpack .../56-python3-ipython_8.20.0-1ubuntu1_all.deb ... 197s Unpacking python3-ipython (8.20.0-1ubuntu1) ... 197s Selecting previously unselected package python3-dateutil. 197s Preparing to unpack .../57-python3-dateutil_2.9.0-2_all.deb ... 197s Unpacking python3-dateutil (2.9.0-2) ... 197s Selecting previously unselected package python3-entrypoints. 197s Preparing to unpack .../58-python3-entrypoints_0.4-2_all.deb ... 197s Unpacking python3-entrypoints (0.4-2) ... 197s Selecting previously unselected package python3-nest-asyncio. 197s Preparing to unpack .../59-python3-nest-asyncio_1.5.4-1_all.deb ... 197s Unpacking python3-nest-asyncio (1.5.4-1) ... 197s Selecting previously unselected package python3-py. 197s Preparing to unpack .../60-python3-py_1.11.0-2_all.deb ... 197s Unpacking python3-py (1.11.0-2) ... 197s Selecting previously unselected package libnorm1t64:armhf. 197s Preparing to unpack .../61-libnorm1t64_1.5.9+dfsg-3.1build1_armhf.deb ... 
197s Unpacking libnorm1t64:armhf (1.5.9+dfsg-3.1build1) ...
197s Selecting previously unselected package libpgm-5.3-0t64:armhf.
197s Preparing to unpack .../62-libpgm-5.3-0t64_5.3.128~dfsg-2.1build1_armhf.deb ...
197s Unpacking libpgm-5.3-0t64:armhf (5.3.128~dfsg-2.1build1) ...
197s Selecting previously unselected package libsodium23:armhf.
197s Preparing to unpack .../63-libsodium23_1.0.18-1build3_armhf.deb ...
197s Unpacking libsodium23:armhf (1.0.18-1build3) ...
197s Selecting previously unselected package libzmq5:armhf.
197s Preparing to unpack .../64-libzmq5_4.3.5-1build2_armhf.deb ...
197s Unpacking libzmq5:armhf (4.3.5-1build2) ...
197s Selecting previously unselected package python3-zmq.
197s Preparing to unpack .../65-python3-zmq_24.0.1-5build1_armhf.deb ...
197s Unpacking python3-zmq (24.0.1-5build1) ...
197s Selecting previously unselected package python3-jupyter-client.
197s Preparing to unpack .../66-python3-jupyter-client_7.4.9-2ubuntu1_all.deb ...
197s Unpacking python3-jupyter-client (7.4.9-2ubuntu1) ...
197s Selecting previously unselected package python3-packaging.
197s Preparing to unpack .../67-python3-packaging_24.0-1_all.deb ...
197s Unpacking python3-packaging (24.0-1) ...
198s Selecting previously unselected package python3-psutil.
198s Preparing to unpack .../68-python3-psutil_5.9.8-2build2_armhf.deb ...
198s Unpacking python3-psutil (5.9.8-2build2) ...
198s Selecting previously unselected package python3-ipykernel.
198s Preparing to unpack .../69-python3-ipykernel_6.29.3-1ubuntu1_all.deb ...
198s Unpacking python3-ipykernel (6.29.3-1ubuntu1) ...
198s Selecting previously unselected package python3-ipython-genutils.
198s Preparing to unpack .../70-python3-ipython-genutils_0.2.0-6_all.deb ...
198s Unpacking python3-ipython-genutils (0.2.0-6) ...
198s Selecting previously unselected package python3-webencodings.
198s Preparing to unpack .../71-python3-webencodings_0.5.1-5_all.deb ...
198s Unpacking python3-webencodings (0.5.1-5) ...
198s Selecting previously unselected package python3-html5lib.
198s Preparing to unpack .../72-python3-html5lib_1.1-6_all.deb ...
198s Unpacking python3-html5lib (1.1-6) ...
198s Selecting previously unselected package python3-bleach.
198s Preparing to unpack .../73-python3-bleach_6.1.0-2_all.deb ...
198s Unpacking python3-bleach (6.1.0-2) ...
198s Selecting previously unselected package python3-soupsieve.
198s Preparing to unpack .../74-python3-soupsieve_2.5-1_all.deb ...
198s Unpacking python3-soupsieve (2.5-1) ...
198s Selecting previously unselected package python3-bs4.
198s Preparing to unpack .../75-python3-bs4_4.12.3-1_all.deb ...
198s Unpacking python3-bs4 (4.12.3-1) ...
198s Selecting previously unselected package python3-defusedxml.
198s Preparing to unpack .../76-python3-defusedxml_0.7.1-2_all.deb ...
198s Unpacking python3-defusedxml (0.7.1-2) ...
198s Selecting previously unselected package python3-jupyterlab-pygments.
198s Preparing to unpack .../77-python3-jupyterlab-pygments_0.2.2-3_all.deb ...
198s Unpacking python3-jupyterlab-pygments (0.2.2-3) ...
198s Selecting previously unselected package python3-mistune.
198s Preparing to unpack .../78-python3-mistune_3.0.2-1_all.deb ...
198s Unpacking python3-mistune (3.0.2-1) ...
198s Selecting previously unselected package python3-fastjsonschema.
198s Preparing to unpack .../79-python3-fastjsonschema_2.19.1-1_all.deb ...
198s Unpacking python3-fastjsonschema (2.19.1-1) ...
198s Selecting previously unselected package python3-nbformat.
198s Preparing to unpack .../80-python3-nbformat_5.9.1-1_all.deb ...
198s Unpacking python3-nbformat (5.9.1-1) ...
198s Selecting previously unselected package python3-nbclient.
198s Preparing to unpack .../81-python3-nbclient_0.8.0-1_all.deb ...
198s Unpacking python3-nbclient (0.8.0-1) ...
198s Selecting previously unselected package python3-pandocfilters.
198s Preparing to unpack .../82-python3-pandocfilters_1.5.1-1_all.deb ...
198s Unpacking python3-pandocfilters (1.5.1-1) ...
198s Selecting previously unselected package python-tinycss2-common.
198s Preparing to unpack .../83-python-tinycss2-common_1.3.0-1_all.deb ...
198s Unpacking python-tinycss2-common (1.3.0-1) ...
198s Selecting previously unselected package python3-tinycss2.
198s Preparing to unpack .../84-python3-tinycss2_1.3.0-1_all.deb ...
198s Unpacking python3-tinycss2 (1.3.0-1) ...
198s Selecting previously unselected package python3-nbconvert.
198s Preparing to unpack .../85-python3-nbconvert_7.16.4-1_all.deb ...
198s Unpacking python3-nbconvert (7.16.4-1) ...
198s Selecting previously unselected package python3-prometheus-client.
198s Preparing to unpack .../86-python3-prometheus-client_0.19.0+ds1-1_all.deb ...
198s Unpacking python3-prometheus-client (0.19.0+ds1-1) ...
198s Selecting previously unselected package python3-send2trash.
198s Preparing to unpack .../87-python3-send2trash_1.8.2-1_all.deb ...
198s Unpacking python3-send2trash (1.8.2-1) ...
199s Selecting previously unselected package python3-notebook.
199s Preparing to unpack .../88-python3-notebook_6.4.12-2.2ubuntu1_all.deb ...
199s Unpacking python3-notebook (6.4.12-2.2ubuntu1) ...
199s Selecting previously unselected package jupyter-notebook.
199s Preparing to unpack .../89-jupyter-notebook_6.4.12-2.2ubuntu1_all.deb ...
199s Unpacking jupyter-notebook (6.4.12-2.2ubuntu1) ...
199s Selecting previously unselected package libjs-sphinxdoc.
199s Preparing to unpack .../90-libjs-sphinxdoc_7.2.6-8_all.deb ...
199s Unpacking libjs-sphinxdoc (7.2.6-8) ...
199s Selecting previously unselected package sphinx-rtd-theme-common.
199s Preparing to unpack .../91-sphinx-rtd-theme-common_2.0.0+dfsg-1_all.deb ...
199s Unpacking sphinx-rtd-theme-common (2.0.0+dfsg-1) ...
199s Selecting previously unselected package python-notebook-doc.
199s Preparing to unpack .../92-python-notebook-doc_6.4.12-2.2ubuntu1_all.deb ...
199s Unpacking python-notebook-doc (6.4.12-2.2ubuntu1) ...
199s Selecting previously unselected package python3-iniconfig.
199s Preparing to unpack .../93-python3-iniconfig_1.1.1-2_all.deb ...
199s Unpacking python3-iniconfig (1.1.1-2) ...
199s Selecting previously unselected package python3-pluggy.
199s Preparing to unpack .../94-python3-pluggy_1.5.0-1_all.deb ...
199s Unpacking python3-pluggy (1.5.0-1) ...
199s Selecting previously unselected package python3-pytest.
199s Preparing to unpack .../95-python3-pytest_7.4.4-1_all.deb ...
199s Unpacking python3-pytest (7.4.4-1) ...
199s Selecting previously unselected package python3-requests-unixsocket.
199s Preparing to unpack .../96-python3-requests-unixsocket_0.3.0-4_all.deb ...
199s Unpacking python3-requests-unixsocket (0.3.0-4) ...
199s Selecting previously unselected package autopkgtest-satdep.
199s Preparing to unpack .../97-1-autopkgtest-satdep.deb ...
199s Unpacking autopkgtest-satdep (0) ...
199s Setting up python3-entrypoints (0.4-2) ...
199s Setting up libjs-jquery-typeahead (2.11.0+dfsg1-3) ...
199s Setting up python3-iniconfig (1.1.1-2) ...
199s Setting up python3-tornado (6.4.1-1) ...
200s Setting up libnorm1t64:armhf (1.5.9+dfsg-3.1build1) ...
200s Setting up python3-pure-eval (0.2.2-2) ...
200s Setting up python3-send2trash (1.8.2-1) ...
200s Setting up fonts-lato (2.015-1) ...
200s Setting up fonts-mathjax (2.7.9+dfsg-1) ...
200s Setting up libsodium23:armhf (1.0.18-1build3) ...
200s Setting up libjs-mathjax (2.7.9+dfsg-1) ...
200s Setting up python3-py (1.11.0-2) ...
200s Setting up libdebuginfod-common (0.191-1) ...
201s Setting up libjs-requirejs-text (2.0.12-1.1) ...
201s Setting up python3-parso (0.8.3-1) ...
201s Setting up python3-defusedxml (0.7.1-2) ...
201s Setting up python3-ipython-genutils (0.2.0-6) ...
201s Setting up python3-asttokens (2.4.1-1) ...
201s Setting up fonts-glyphicons-halflings (1.009~3.4.1+dfsg-3) ...
201s Setting up python3-coverage (7.4.4+dfsg1-0ubuntu2) ...
202s Setting up libjs-moment (2.29.4+ds-1) ...
202s Setting up python3-pandocfilters (1.5.1-1) ...
202s Setting up libjs-requirejs (2.3.6+ds+~2.1.37-1) ...
202s Setting up libjs-es6-promise (4.2.8-12) ...
202s Setting up libjs-text-encoding (0.7.0-5) ...
202s Setting up python3-webencodings (0.5.1-5) ...
202s Setting up python3-platformdirs (4.2.1-1) ...
202s Setting up python3-psutil (5.9.8-2build2) ...
202s Setting up libsource-highlight-common (3.1.9-4.3build1) ...
202s Setting up libc6-dbg:armhf (2.39-0ubuntu9) ...
202s Setting up libdw1t64:armhf (0.191-1) ...
202s Setting up python3-requests-unixsocket (0.3.0-4) ...
202s Setting up python3-jupyterlab-pygments (0.2.2-3) ...
203s Setting up libpython3.12t64:armhf (3.12.4-1) ...
203s Setting up libpgm-5.3-0t64:armhf (5.3.128~dfsg-2.1build1) ...
203s Setting up python3-decorator (5.1.1-5) ...
203s Setting up python3-packaging (24.0-1) ...
203s Setting up python3-wcwidth (0.2.5+dfsg1-1.1ubuntu1) ...
203s Setting up node-jed (1.1.1-4) ...
203s Setting up python3-typeshed (0.0~git20231111.6764465-3) ...
203s Setting up python3-executing (2.0.1-0.1) ...
203s Setting up libjs-xterm (5.3.0-2) ...
203s Setting up python3-nest-asyncio (1.5.4-1) ...
204s Setting up python3-bytecode (0.15.1-3) ...
204s Setting up libjs-codemirror (5.65.0+~cs5.83.9-3) ...
204s Setting up libjs-jed (1.1.1-4) ...
204s Setting up python3-html5lib (1.1-6) ...
204s Setting up libbabeltrace1:armhf (1.5.11-3build3) ...
204s Setting up python3-pluggy (1.5.0-1) ...
204s Setting up python3-fastjsonschema (2.19.1-1) ...
204s Setting up python3-traitlets (5.14.3-1) ...
204s Setting up python-tinycss2-common (1.3.0-1) ...
204s Setting up python3-argon2 (21.1.0-2build1) ...
205s Setting up python3-dateutil (2.9.0-2) ...
205s Setting up libjs-jquery (3.6.1+dfsg+~3.5.14-1) ...
205s Setting up python3-mistune (3.0.2-1) ...
205s Setting up python3-stack-data (0.6.3-1) ...
205s Setting up python3-soupsieve (2.5-1) ...
205s Setting up fonts-font-awesome (5.0.10+really4.7.0~dfsg-4.1) ...
205s Setting up sphinx-rtd-theme-common (2.0.0+dfsg-1) ...
205s Setting up python3-jupyter-core (5.3.2-2) ...
205s Setting up libjs-bootstrap (3.4.1+dfsg-3) ...
205s Setting up libjs-underscore (1.13.4~dfsg+~1.11.4-3) ...
205s Setting up python3-ptyprocess (0.7.0-5) ...
206s Setting up libjs-marked (4.2.3+ds+~4.0.7-3) ...
206s Setting up python3-prompt-toolkit (3.0.46-1) ...
206s Setting up libdebuginfod1t64:armhf (0.191-1) ...
206s Setting up python3-tinycss2 (1.3.0-1) ...
206s Setting up libzmq5:armhf (4.3.5-1build2) ...
206s Setting up python3-jedi (0.19.1+ds1-1) ...
206s Setting up python3-pytest (7.4.4-1) ...
207s Setting up libjs-bootstrap-tour (0.12.0+dfsg-5) ...
207s Setting up libjs-backbone (1.4.1~dfsg+~1.4.15-3) ...
207s Setting up libsource-highlight4t64:armhf (3.1.9-4.3build1) ...
207s Setting up python3-nbformat (5.9.1-1) ...
207s Setting up python3-bs4 (4.12.3-1) ...
207s Setting up python3-bleach (6.1.0-2) ...
207s Setting up python3-matplotlib-inline (0.1.6-2) ...
207s Setting up python3-comm (0.2.1-1) ...
208s Setting up python3-prometheus-client (0.19.0+ds1-1) ...
208s Setting up gdb (15.0.50.20240403-0ubuntu1) ...
208s Setting up libjs-jquery-ui (1.13.2+dfsg-1) ...
208s Setting up python3-pexpect (4.9-2) ...
208s Setting up python3-zmq (24.0.1-5build1) ...
208s Setting up libjs-sphinxdoc (7.2.6-8) ...
208s Setting up python3-terminado (0.18.1-1) ...
208s Setting up python3-jupyter-client (7.4.9-2ubuntu1) ...
209s Setting up jupyter-core (5.3.2-2) ...
209s Setting up python3-pydevd (2.10.0+ds-10ubuntu1) ...
209s Setting up python3-debugpy (1.8.0+ds-4ubuntu4) ...
209s Setting up python-notebook-doc (6.4.12-2.2ubuntu1) ...
209s Setting up python3-nbclient (0.8.0-1) ...
210s Setting up python3-ipython (8.20.0-1ubuntu1) ...
210s Setting up python3-ipykernel (6.29.3-1ubuntu1) ...
210s Setting up python3-nbconvert (7.16.4-1) ...
211s Setting up python3-notebook (6.4.12-2.2ubuntu1) ...
211s Setting up jupyter-notebook (6.4.12-2.2ubuntu1) ...
211s Setting up autopkgtest-satdep (0) ...
211s Processing triggers for man-db (2.12.1-2) ...
212s Processing triggers for libc-bin (2.39-0ubuntu9) ...
236s (Reading database ... 75468 files and directories currently installed.)
236s Removing autopkgtest-satdep (0) ...
241s autopkgtest [10:31:20]: test pytest: [-----------------------
245s ============================= test session starts ==============================
245s platform linux -- Python 3.12.4, pytest-7.4.4, pluggy-1.5.0
245s rootdir: /tmp/autopkgtest.FXI16z/build.fGX/src
245s collected 330 items / 5 deselected / 325 selected
245s 
245s notebook/auth/tests/test_login.py EE [ 0%]
246s notebook/auth/tests/test_security.py .... [ 1%]
247s notebook/bundler/tests/test_bundler_api.py EEEEE [ 3%]
247s notebook/bundler/tests/test_bundler_tools.py ............. [ 7%]
247s notebook/bundler/tests/test_bundlerextension.py ... [ 8%]
248s notebook/nbconvert/tests/test_nbconvert_handlers.py ssssss [ 10%]
248s notebook/services/api/tests/test_api.py EEE [ 11%]
248s notebook/services/config/tests/test_config_api.py EEE [ 12%]
251s notebook/services/contents/tests/test_contents_api.py EsEEEEEEEEEEssEEsE [ 17%]
260s EEEEEEEEEEEEEEEEEEEEEEEEEsEEEEEEEEEEEssEEsEEEEEEEEEEEEEEEEEEEEEEEEE [ 38%]
260s notebook/services/contents/tests/test_fileio.py ... [ 39%]
260s notebook/services/contents/tests/test_largefilemanager.py . [ 39%]
261s notebook/services/contents/tests/test_manager.py .....s........ss....... [ 46%]
261s ...ss........ [ 50%]
263s notebook/services/kernels/tests/test_kernels_api.py EEEEEEEEEEEE [ 54%]
264s notebook/services/kernelspecs/tests/test_kernelspecs_api.py EEEEEEE [ 56%]
264s notebook/services/nbconvert/tests/test_nbconvert_api.py E [ 56%]
266s notebook/services/sessions/tests/test_sessionmanager.py FFFFFFFFF [ 59%]
269s notebook/services/sessions/tests/test_sessions_api.py EEEEEEEEEEEEEEEEEE [ 64%]
269s EEEE [ 66%]
271s notebook/terminal/tests/test_terminals_api.py EEEEEEEE [ 68%]
271s notebook/tests/test_config_manager.py . [ 68%]
272s notebook/tests/test_files.py EEEEE [ 70%]
272s notebook/tests/test_gateway.py EEEEEE [ 72%]
272s notebook/tests/test_i18n.py . [ 72%]
272s notebook/tests/test_log.py . [ 72%]
274s notebook/tests/test_nbextensions.py ................................... [ 83%]
277s notebook/tests/test_notebookapp.py FFFFFFFFF........F.EEEEEEE [ 91%]
278s notebook/tests/test_paths.py ..E [ 92%]
278s notebook/tests/test_serialize.py .. [ 93%]
279s notebook/tests/test_serverextensions.py ...FF [ 94%]
279s notebook/tests/test_traittypes.py ........... [ 98%]
280s notebook/tests/test_utils.py F...s [ 99%]
280s notebook/tree/tests/test_tree_handler.py E [100%]
280s 
280s ==================================== ERRORS ====================================
280s __________________ ERROR at setup of LoginTest.test_next_bad ___________________
280s 
280s self = 
280s 
280s     def _new_conn(self) -> socket.socket:
280s         """Establish a socket connection and set nodelay settings on it.
280s 
280s         :return: New socket connection.
280s         """
280s         try:
280s >           sock = connection.create_connection(
280s                 (self._dns_host, self.port),
280s                 self.timeout,
280s                 source_address=self.source_address,
280s                 socket_options=self.socket_options,
280s             )
280s 
280s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
280s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
280s     raise err
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
280s 
280s address = ('localhost', 12341), timeout = None, source_address = None
280s socket_options = [(6, 1, 1)]
280s 
280s     def create_connection(
280s         address: tuple[str, int],
280s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
280s         source_address: tuple[str, int] | None = None,
280s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
280s     ) -> socket.socket:
280s         """Connect to *address* and return the socket object.
280s 
280s         Convenience function. Connect to *address* (a 2-tuple ``(host,
280s         port)``) and return the socket object. Passing the optional
280s         *timeout* parameter will set the timeout on the socket instance
280s         before attempting to connect. If no *timeout* is supplied, the
280s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
280s         is used. If *source_address* is set it must be a tuple of (host, port)
280s         for the socket to bind as a source address before making the connection.
280s         An host of '' or port 0 tells the OS to use the default.
280s         """
280s 
280s         host, port = address
280s         if host.startswith("["):
280s             host = host.strip("[]")
280s         err = None
280s 
280s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
280s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
280s         # The original create_connection function always returns all records.
280s         family = allowed_gai_family()
280s 
280s         try:
280s             host.encode("idna")
280s         except UnicodeError:
280s             raise LocationParseError(f"'{host}', label empty or too long") from None
280s 
280s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
280s             af, socktype, proto, canonname, sa = res
280s             sock = None
280s             try:
280s                 sock = socket.socket(af, socktype, proto)
280s 
280s                 # If provided, set socket level options before connecting.
280s                 _set_socket_options(sock, socket_options)
280s 
280s                 if timeout is not _DEFAULT_TIMEOUT:
280s                     sock.settimeout(timeout)
280s                 if source_address:
280s                     sock.bind(source_address)
280s >               sock.connect(sa)
280s E               ConnectionRefusedError: [Errno 111] Connection refused
280s 
280s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
280s 
280s The above exception was the direct cause of the following exception:
280s 
280s self = 
280s method = 'GET', url = '/a%40b/api/contents', body = None
280s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
280s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
280s redirect = False, assert_same_host = False
280s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
280s release_conn = False, chunked = False, body_pos = None, preload_content = False
280s decode_content = False, response_kw = {}
280s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
280s destination_scheme = None, conn = None, release_this_conn = True
280s http_tunnel_required = False, err = None, clean_exit = False
280s 
280s     def urlopen( # type: ignore[override]
280s         self,
280s         method: str,
280s         url: str,
280s         body: _TYPE_BODY | None = None,
280s         headers: typing.Mapping[str, str] | None = None,
280s         retries: Retry | bool | int | None = None,
280s         redirect: bool = True,
280s         assert_same_host: bool = True,
280s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
280s         pool_timeout: int | None = None,
280s         release_conn: bool | None = None,
280s         chunked: bool = False,
280s         body_pos: _TYPE_BODY_POSITION | None = None,
280s         preload_content: bool = True,
280s         decode_content: bool = True,
280s         **response_kw: typing.Any,
280s     ) -> BaseHTTPResponse:
280s         """
280s         Get a connection from the pool and perform an HTTP request. This is the
280s         lowest level call for making a request, so you'll need to specify all
280s         the raw details.
280s 
280s         .. note::
280s 
280s            More commonly, it's appropriate to use a convenience method
280s            such as :meth:`request`.
280s 
280s         .. note::
280s 
280s            `release_conn` will only behave as expected if
280s            `preload_content=False` because we want to make
280s            `preload_content=False` the default behaviour someday soon without
280s            breaking backwards compatibility.
280s 
280s         :param method:
280s             HTTP request method (such as GET, POST, PUT, etc.)
280s 
280s         :param url:
280s             The URL to perform the request on.
280s 
280s         :param body:
280s             Data to send in the request body, either :class:`str`, :class:`bytes`,
280s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
280s 
280s         :param headers:
280s             Dictionary of custom headers to send, such as User-Agent,
280s             If-None-Match, etc. If None, pool headers are used. If provided,
280s             these headers completely replace any pool-specific headers.
280s 
280s         :param retries:
280s             Configure the number of retries to allow before raising a
280s             :class:`~urllib3.exceptions.MaxRetryError` exception.
280s 
280s             Pass ``None`` to retry until you receive a response. Pass a
280s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
280s             over different types of retries.
280s             Pass an integer number to retry connection errors that many times,
280s             but no other types of errors. Pass zero to never retry.
280s 
280s             If ``False``, then retries are disabled and any exception is raised
280s             immediately. Also, instead of raising a MaxRetryError on redirects,
280s             the redirect response will be returned.
280s 
280s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
280s 
280s         :param redirect:
280s             If True, automatically handle redirects (status codes 301, 302,
280s             303, 307, 308). Each redirect counts as a retry. Disabling retries
280s             will disable redirect, too.
280s 
280s         :param assert_same_host:
280s             If ``True``, will make sure that the host of the pool requests is
280s             consistent else will raise HostChangedError. When ``False``, you can
280s             use the pool on an HTTP proxy and request foreign hosts.
280s 
280s         :param timeout:
280s             If specified, overrides the default timeout for this one
280s             request. It may be a float (in seconds) or an instance of
280s             :class:`urllib3.util.Timeout`.
280s 
280s         :param pool_timeout:
280s             If set and the pool is set to block=True, then this method will
280s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
280s             connection is available within the time period.
280s 
280s         :param bool preload_content:
280s             If True, the response's body will be preloaded into memory.
280s 
280s         :param bool decode_content:
280s             If True, will attempt to decode the body based on the
280s             'content-encoding' header.
280s 
280s         :param release_conn:
280s             If False, then the urlopen call will not release the connection
280s             back into the pool once a response is received (but will release if
280s             you read the entire contents of the response such as when
280s             `preload_content=True`). This is useful if you're not preloading
280s             the response's content immediately. You will need to call
280s             ``r.release_conn()`` on the response ``r`` to return the connection
280s             back into the pool. If None, it takes the value of ``preload_content``
280s             which defaults to ``True``.
280s 
280s         :param bool chunked:
280s             If True, urllib3 will send the body using chunked transfer
280s             encoding. Otherwise, urllib3 will send the body using the standard
280s             content-length form. Defaults to False.
280s 
280s         :param int body_pos:
280s             Position to seek to in file-like body in the event of a retry or
280s             redirect. Typically this won't need to be set because urllib3 will
280s             auto-populate the value when needed.
280s         """
280s         parsed_url = parse_url(url)
280s         destination_scheme = parsed_url.scheme
280s 
280s         if headers is None:
280s             headers = self.headers
280s 
280s         if not isinstance(retries, Retry):
280s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
280s 
280s         if release_conn is None:
280s             release_conn = preload_content
280s 
280s         # Check host
280s         if assert_same_host and not self.is_same_host(url):
280s             raise HostChangedError(self, url, retries)
280s 
280s         # Ensure that the URL we're connecting to is properly encoded
280s         if url.startswith("/"):
280s             url = to_str(_encode_target(url))
280s         else:
280s             url = to_str(parsed_url.url)
280s 
280s         conn = None
280s 
280s         # Track whether `conn` needs to be released before
280s         # returning/raising/recursing. Update this variable if necessary, and
280s         # leave `release_conn` constant throughout the function. That way, if
280s         # the function recurses, the original value of `release_conn` will be
280s         # passed down into the recursive call, and its value will be respected.
280s         #
280s         # See issue #651 [1] for details.
280s         #
280s         # [1]
280s         release_this_conn = release_conn
280s 
280s         http_tunnel_required = connection_requires_http_tunnel(
280s             self.proxy, self.proxy_config, destination_scheme
280s         )
280s 
280s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
280s         # have to copy the headers dict so we can safely change it without those
280s         # changes being reflected in anyone else's copy.
280s         if not http_tunnel_required:
280s             headers = headers.copy() # type: ignore[attr-defined]
280s             headers.update(self.proxy_headers) # type: ignore[union-attr]
280s 
280s         # Must keep the exception bound to a separate variable or else Python 3
280s         # complains about UnboundLocalError.
280s         err = None
280s 
280s         # Keep track of whether we cleanly exited the except block. This
280s         # ensures we do proper cleanup in finally.
280s         clean_exit = False
280s 
280s         # Rewind body position, if needed. Record current position
280s         # for future rewinds in the event of a redirect/retry.
280s         body_pos = set_file_position(body, body_pos)
280s 
280s         try:
280s             # Request a connection from the queue.
280s             timeout_obj = self._get_timeout(timeout)
280s             conn = self._get_conn(timeout=pool_timeout)
280s 
280s             conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment]
280s 
280s             # Is this a closed/new connection that requires CONNECT tunnelling?
280s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
280s                 try:
280s                     self._prepare_proxy(conn)
280s                 except (BaseSSLError, OSError, SocketTimeout) as e:
280s                     self._raise_timeout(
280s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
280s                     )
280s                     raise
280s 
280s             # If we're going to release the connection in ``finally:``, then
280s             # the response doesn't need to know about the connection. Otherwise
280s             # it will also try to release it and we'll have a double-release
280s             # mess.
280s             response_conn = conn if not release_conn else None
280s 
280s             # Make the request on the HTTPConnection object
280s >           response = self._make_request(
280s                 conn,
280s                 method,
280s                 url,
280s                 timeout=timeout_obj,
280s                 body=body,
280s                 headers=headers,
280s                 chunked=chunked,
280s                 retries=retries,
280s                 response_conn=response_conn,
280s                 preload_content=preload_content,
280s                 decode_content=decode_content,
280s                 **response_kw,
280s             )
280s 
280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
280s     conn.request(
280s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
280s     self.endheaders()
280s /usr/lib/python3.12/http/client.py:1331: in endheaders
280s     self._send_output(message_body, encode_chunked=encode_chunked)
280s /usr/lib/python3.12/http/client.py:1091: in _send_output
280s     self.send(msg)
280s /usr/lib/python3.12/http/client.py:1035: in send
280s     self.connect()
280s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
280s     self.sock = self._new_conn()
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
280s 
280s self = 
280s 
280s     def _new_conn(self) -> socket.socket:
280s         """Establish a socket connection and set nodelay settings on it.
280s 
280s         :return: New socket connection.
280s         """
280s         try:
280s             sock = connection.create_connection(
280s                 (self._dns_host, self.port),
280s                 self.timeout,
280s                 source_address=self.source_address,
280s                 socket_options=self.socket_options,
280s             )
280s         except socket.gaierror as e:
280s             raise NameResolutionError(self.host, self, e) from e
280s         except SocketTimeout as e:
280s             raise ConnectTimeoutError(
280s                 self,
280s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
280s             ) from e
280s 
280s         except OSError as e:
280s >           raise NewConnectionError(
280s                 self, f"Failed to establish a new connection: {e}"
280s             ) from e
280s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
280s 
280s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
280s 
280s The above exception was the direct cause of the following exception:
280s 
280s self = 
280s request = , stream = False
280s timeout = Timeout(connect=None, read=None, total=None), verify = True
280s cert = None, proxies = OrderedDict()
280s 
280s     def send(
280s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
280s     ):
280s         """Sends PreparedRequest object. Returns Response object.
280s 
280s         :param request: The :class:`PreparedRequest ` being sent.
280s         :param stream: (optional) Whether to stream the request content.
280s         :param timeout: (optional) How long to wait for the server to send
280s             data before giving up, as a float, or a :ref:`(connect timeout,
280s             read timeout) ` tuple.
280s         :type timeout: float or tuple or urllib3 Timeout object
280s         :param verify: (optional) Either a boolean, in which case it controls whether
280s             we verify the server's TLS certificate, or a string, in which case it
280s             must be a path to a CA bundle to use
280s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
280s         :param proxies: (optional) The proxies dictionary to apply to the request.
280s         :rtype: requests.Response
280s         """
280s 
280s         try:
280s             conn = self.get_connection(request.url, proxies)
280s         except LocationValueError as e:
280s             raise InvalidURL(e, request=request)
280s 
280s         self.cert_verify(conn, request.url, verify, cert)
280s         url = self.request_url(request, proxies)
280s         self.add_headers(
280s             request,
280s             stream=stream,
280s             timeout=timeout,
280s             verify=verify,
280s             cert=cert,
280s             proxies=proxies,
280s         )
280s 
280s         chunked = not (request.body is None or "Content-Length" in request.headers)
280s 
280s         if isinstance(timeout, tuple):
280s             try:
280s                 connect, read = timeout
280s                 timeout = TimeoutSauce(connect=connect, read=read)
280s             except ValueError:
280s                 raise ValueError(
280s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
280s                     f"or a single float to set both timeouts to the same value."
280s                 )
280s         elif isinstance(timeout, TimeoutSauce):
280s             pass
280s         else:
280s             timeout = TimeoutSauce(connect=timeout, read=timeout)
280s 
280s         try:
280s >           resp = conn.urlopen(
280s                 method=request.method,
280s                 url=url,
280s                 body=request.body,
280s                 headers=request.headers,
280s                 redirect=False,
280s                 assert_same_host=False,
280s                 preload_content=False,
280s                 decode_content=False,
280s                 retries=self.max_retries,
280s                 timeout=timeout,
280s                 chunked=chunked,
280s             )
280s 
280s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
280s     retries = retries.increment(
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
280s 
280s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
280s method = 'GET', url = '/a%40b/api/contents', response = None
280s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
280s _pool = 
280s _stacktrace = 
280s 
280s     def increment(
280s         self,
280s         method: str | None = None,
280s         url: str | None = None,
280s         response: BaseHTTPResponse | None = None,
280s         error: Exception | None = None,
280s         _pool: ConnectionPool | None = None,
280s         _stacktrace: TracebackType | None = None,
280s     ) -> Retry:
280s         """Return a new Retry object with incremented retry counters.
280s 
280s         :param response: A response object, or None, if the server did not
280s             return a response.
280s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
280s         :param Exception error: An error encountered during the request, or
280s             None if the response was received successfully.
280s 
280s         :return: A new ``Retry`` object.
280s         """
280s         if self.total is False and error:
280s             # Disabled, indicate to re-raise the error.
280s             raise reraise(type(error), error, _stacktrace)
280s 
280s         total = self.total
280s         if total is not None:
280s             total -= 1
280s 
280s         connect = self.connect
280s         read = self.read
280s         redirect = self.redirect
280s         status_count = self.status
280s         other = self.other
280s         cause = "unknown"
280s         status = None
280s         redirect_location = None
280s 
280s         if error and self._is_connection_error(error):
280s             # Connect retry?
280s             if connect is False:
280s                 raise reraise(type(error), error, _stacktrace)
280s             elif connect is not None:
280s                 connect -= 1
280s 
280s         elif error and self._is_read_error(error):
280s             # Read retry?
280s             if read is False or method is None or not self._is_method_retryable(method):
280s                 raise reraise(type(error), error, _stacktrace)
280s             elif read is not None:
280s                 read -= 1
280s 
280s         elif error:
280s             # Other retry?
280s             if other is not None:
280s                 other -= 1
280s 
280s         elif response and response.get_redirect_location():
280s             # Redirect retry?
280s if redirect is not None: 280s redirect -= 1 280s cause = "too many redirects" 280s response_redirect_location = response.get_redirect_location() 280s if response_redirect_location: 280s redirect_location = response_redirect_location 280s status = response.status 280s 280s else: 280s # Incrementing because of a server error like a 500 in 280s # status_forcelist and the given method is in the allowed_methods 280s cause = ResponseError.GENERIC_ERROR 280s if response and response.status: 280s if status_count is not None: 280s status_count -= 1 280s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 280s status = response.status 280s 280s history = self.history + ( 280s RequestHistory(method, url, error, status, redirect_location), 280s ) 280s 280s new_retry = self.new( 280s total=total, 280s connect=connect, 280s read=read, 280s redirect=redirect, 280s status=status_count, 280s other=other, 280s history=history, 280s ) 280s 280s if new_retry.is_exhausted(): 280s reason = error or ResponseError(cause) 280s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 280s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 280s 280s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 280s 280s During handling of the above exception, another exception occurred: 280s 280s cls = 280s 280s @classmethod 280s def wait_until_alive(cls): 280s """Wait for the server to be alive""" 280s url = cls.base_url() + 'api/contents' 280s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 280s try: 280s > cls.fetch_url(url) 280s 280s notebook/tests/launchnotebook.py:53: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s notebook/tests/launchnotebook.py:82: in fetch_url 280s return requests.get(url) 280s 
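The `Retry(total=0, ...)` object visible in this traceback explains why the first refused connection immediately becomes a `MaxRetryError`: `increment` returns a new `Retry` with the relevant counter decremented, and a negative counter means exhausted. A minimal standalone sketch of that counter logic (an assumed simplification, not urllib3's actual class):

```python
# Sketch of the budget logic seen in Retry.increment above: decrement on
# each error; a budget below zero means the retry state is exhausted.
def increment_total(total):
    """Decrement the total-retry budget; None means unlimited."""
    if total is None:
        return None, False          # unlimited retries, never exhausted
    total -= 1
    return total, total < 0         # exhausted once the budget goes negative

# With total=0 (as in the log), the very first connection error exhausts
# the budget, so MaxRetryError is raised instead of retrying.
total, exhausted = increment_total(0)
assert (total, exhausted) == (-1, True)
```

This is why the test harness sees the failure on the first poll rather than after a delay: the adapter's `max_retries` is zero here.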
/usr/lib/python3/dist-packages/requests/api.py:73: in get 280s return request("get", url, params=params, **kwargs) 280s /usr/lib/python3/dist-packages/requests/api.py:59: in request 280s return session.request(method=method, url=url, **kwargs) 280s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 280s resp = self.send(prep, **send_kwargs) 280s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 280s r = adapter.send(request, **kwargs) 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s self = 280s request = , stream = False 280s timeout = Timeout(connect=None, read=None, total=None), verify = True 280s cert = None, proxies = OrderedDict() 280s 280s def send( 280s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 280s ): 280s """Sends PreparedRequest object. Returns Response object. 280s 280s :param request: The :class:`PreparedRequest ` being sent. 280s :param stream: (optional) Whether to stream the request content. 280s :param timeout: (optional) How long to wait for the server to send 280s data before giving up, as a float, or a :ref:`(connect timeout, 280s read timeout) ` tuple. 280s :type timeout: float or tuple or urllib3 Timeout object 280s :param verify: (optional) Either a boolean, in which case it controls whether 280s we verify the server's TLS certificate, or a string, in which case it 280s must be a path to a CA bundle to use 280s :param cert: (optional) Any user-provided SSL certificate to be trusted. 280s :param proxies: (optional) The proxies dictionary to apply to the request. 
280s :rtype: requests.Response 280s """ 280s 280s try: 280s conn = self.get_connection(request.url, proxies) 280s except LocationValueError as e: 280s raise InvalidURL(e, request=request) 280s 280s self.cert_verify(conn, request.url, verify, cert) 280s url = self.request_url(request, proxies) 280s self.add_headers( 280s request, 280s stream=stream, 280s timeout=timeout, 280s verify=verify, 280s cert=cert, 280s proxies=proxies, 280s ) 280s 280s chunked = not (request.body is None or "Content-Length" in request.headers) 280s 280s if isinstance(timeout, tuple): 280s try: 280s connect, read = timeout 280s timeout = TimeoutSauce(connect=connect, read=read) 280s except ValueError: 280s raise ValueError( 280s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 280s f"or a single float to set both timeouts to the same value." 280s ) 280s elif isinstance(timeout, TimeoutSauce): 280s pass 280s else: 280s timeout = TimeoutSauce(connect=timeout, read=timeout) 280s 280s try: 280s resp = conn.urlopen( 280s method=request.method, 280s url=url, 280s body=request.body, 280s headers=request.headers, 280s redirect=False, 280s assert_same_host=False, 280s preload_content=False, 280s decode_content=False, 280s retries=self.max_retries, 280s timeout=timeout, 280s chunked=chunked, 280s ) 280s 280s except (ProtocolError, OSError) as err: 280s raise ConnectionError(err, request=request) 280s 280s except MaxRetryError as e: 280s if isinstance(e.reason, ConnectTimeoutError): 280s # TODO: Remove this in 3.0.0: see #2811 280s if not isinstance(e.reason, NewConnectionError): 280s raise ConnectTimeout(e, request=request) 280s 280s if isinstance(e.reason, ResponseError): 280s raise RetryError(e, request=request) 280s 280s if isinstance(e.reason, _ProxyError): 280s raise ProxyError(e, request=request) 280s 280s if isinstance(e.reason, _SSLError): 280s # This branch is for urllib3 v1.22 and later. 
280s raise SSLError(e, request=request) 280s 280s > raise ConnectionError(e, request=request) 280s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 280s 280s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 280s 280s The above exception was the direct cause of the following exception: 280s 280s cls = 280s 280s @classmethod 280s def setup_class(cls): 280s cls.tmp_dir = TemporaryDirectory() 280s def tmp(*parts): 280s path = os.path.join(cls.tmp_dir.name, *parts) 280s try: 280s os.makedirs(path) 280s except OSError as e: 280s if e.errno != errno.EEXIST: 280s raise 280s return path 280s 280s cls.home_dir = tmp('home') 280s data_dir = cls.data_dir = tmp('data') 280s config_dir = cls.config_dir = tmp('config') 280s runtime_dir = cls.runtime_dir = tmp('runtime') 280s cls.notebook_dir = tmp('notebooks') 280s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 280s cls.env_patch.start() 280s # Patch systemwide & user-wide data & config directories, to isolate 280s # the tests from oddities of the local setup. But leave Python env 280s # locations alone, so data files for e.g. nbconvert are accessible. 280s # If this isolation isn't sufficient, you may need to run the tests in 280s # a virtualenv or conda env. 
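The `HTTPAdapter.send` code quoted above normalizes the user-supplied `timeout` into a (connect, read) pair before handing it to urllib3: a tuple is unpacked, a single number sets both phases. A self-contained sketch of that normalization (the real code builds a `TimeoutSauce`/`urllib3.util.Timeout` object; this returns a plain tuple):

```python
def normalize_timeout(timeout):
    """Mimic the (connect, read) normalization in HTTPAdapter.send above."""
    if isinstance(timeout, tuple):
        try:
            connect, read = timeout
        except ValueError:
            raise ValueError(
                f"Invalid timeout {timeout}. Pass a (connect, read) timeout "
                f"tuple, or a single float to set both timeouts to the same value."
            )
    else:
        # A single number (or None, as in this log) applies to both phases.
        connect = read = timeout
    return connect, read

assert normalize_timeout(5.0) == (5.0, 5.0)
assert normalize_timeout((3.05, 27)) == (3.05, 27)
```

Note that in the failing run the effective timeout is `Timeout(connect=None, read=None, total=None)`, i.e. no timeout at all; the failure is a refused connection, not a timeout.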
280s cls.path_patch = patch.multiple( 280s jupyter_core.paths, 280s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 280s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 280s ) 280s cls.path_patch.start() 280s 280s config = cls.config or Config() 280s config.NotebookNotary.db_file = ':memory:' 280s 280s cls.token = hexlify(os.urandom(4)).decode('ascii') 280s 280s started = Event() 280s def start_thread(): 280s try: 280s bind_args = cls.get_bind_args() 280s app = cls.notebook = NotebookApp( 280s port_retries=0, 280s open_browser=False, 280s config_dir=cls.config_dir, 280s data_dir=cls.data_dir, 280s runtime_dir=cls.runtime_dir, 280s notebook_dir=cls.notebook_dir, 280s base_url=cls.url_prefix, 280s config=config, 280s allow_root=True, 280s token=cls.token, 280s **bind_args 280s ) 280s if "asyncio" in sys.modules: 280s app._init_asyncio_patch() 280s import asyncio 280s 280s asyncio.set_event_loop(asyncio.new_event_loop()) 280s # Patch the current loop in order to match production 280s # behavior 280s import nest_asyncio 280s 280s nest_asyncio.apply() 280s # don't register signal handler during tests 280s app.init_signal = lambda : None 280s # clear log handlers and propagate to root for nose to capture it 280s # needs to be redone after initialize, which reconfigures logging 280s app.log.propagate = True 280s app.log.handlers = [] 280s app.initialize(argv=cls.get_argv()) 280s app.log.propagate = True 280s app.log.handlers = [] 280s loop = IOLoop.current() 280s loop.add_callback(started.set) 280s app.start() 280s finally: 280s # set the event, so failure to start doesn't cause a hang 280s started.set() 280s app.session_manager.close() 280s cls.notebook_thread = Thread(target=start_thread) 280s cls.notebook_thread.daemon = True 280s cls.notebook_thread.start() 280s started.wait() 280s > cls.wait_until_alive() 280s 280s notebook/tests/launchnotebook.py:198: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s cls = 280s 280s 
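The `wait_until_alive` helper that fails here polls the contents API until the server answers, and bails out early if the server thread has already died. The same pattern, reduced to a self-contained sketch (names and timings are illustrative, not the notebook test suite's actual constants):

```python
import time

def wait_until_alive(fetch, thread_alive, max_wait=30.0, poll_interval=0.1):
    """Poll `fetch` until it succeeds; fail fast if the server thread died."""
    for _ in range(int(max_wait / poll_interval)):
        try:
            fetch()
            return
        except Exception as e:
            if not thread_alive():
                # Server never came up: surface the root cause, as in the log.
                raise RuntimeError("The notebook server failed to start") from e
            time.sleep(poll_interval)
    raise RuntimeError("Timeout waiting for server to be alive")

# A server that becomes reachable on the third poll:
calls = {"n": 0}
def fake_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("connection refused")

wait_until_alive(fake_fetch, thread_alive=lambda: True,
                 max_wait=1.0, poll_interval=0.01)
assert calls["n"] == 3
```

In this log the second branch fires: the thread is dead, so the `ConnectionError` is chained into `RuntimeError: The notebook server failed to start`.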
@classmethod 280s def wait_until_alive(cls): 280s """Wait for the server to be alive""" 280s url = cls.base_url() + 'api/contents' 280s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 280s try: 280s cls.fetch_url(url) 280s except ModuleNotFoundError as error: 280s # Errors that should be immediately thrown back to caller 280s raise error 280s except Exception as e: 280s if not cls.notebook_thread.is_alive(): 280s > raise RuntimeError("The notebook server failed to start") from e 280s E RuntimeError: The notebook server failed to start 280s 280s notebook/tests/launchnotebook.py:59: RuntimeError 280s ___________________ ERROR at setup of LoginTest.test_next_ok ___________________ 280s 280s self = 280s 280s def _new_conn(self) -> socket.socket: 280s """Establish a socket connection and set nodelay settings on it. 280s 280s :return: New socket connection. 280s """ 280s try: 280s > sock = connection.create_connection( 280s (self._dns_host, self.port), 280s self.timeout, 280s source_address=self.source_address, 280s socket_options=self.socket_options, 280s ) 280s 280s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 280s raise err 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s address = ('localhost', 12341), timeout = None, source_address = None 280s socket_options = [(6, 1, 1)] 280s 280s def create_connection( 280s address: tuple[str, int], 280s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 280s source_address: tuple[str, int] | None = None, 280s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 280s ) -> socket.socket: 280s """Connect to *address* and return the socket object. 280s 280s Convenience function. Connect to *address* (a 2-tuple ``(host, 280s port)``) and return the socket object. 
Passing the optional 280s *timeout* parameter will set the timeout on the socket instance 280s before attempting to connect. If no *timeout* is supplied, the 280s global default timeout setting returned by :func:`socket.getdefaulttimeout` 280s is used. If *source_address* is set it must be a tuple of (host, port) 280s for the socket to bind as a source address before making the connection. 280s An host of '' or port 0 tells the OS to use the default. 280s """ 280s 280s host, port = address 280s if host.startswith("["): 280s host = host.strip("[]") 280s err = None 280s 280s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 280s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 280s # The original create_connection function always returns all records. 280s family = allowed_gai_family() 280s 280s try: 280s host.encode("idna") 280s except UnicodeError: 280s raise LocationParseError(f"'{host}', label empty or too long") from None 280s 280s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 280s af, socktype, proto, canonname, sa = res 280s sock = None 280s try: 280s sock = socket.socket(af, socktype, proto) 280s 280s # If provided, set socket level options before connecting. 
280s _set_socket_options(sock, socket_options) 280s 280s if timeout is not _DEFAULT_TIMEOUT: 280s sock.settimeout(timeout) 280s if source_address: 280s sock.bind(source_address) 280s > sock.connect(sa) 280s E ConnectionRefusedError: [Errno 111] Connection refused 280s 280s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 280s 280s The above exception was the direct cause of the following exception: 280s 280s self = 280s method = 'GET', url = '/a%40b/api/contents', body = None 280s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 280s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 280s redirect = False, assert_same_host = False 280s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 280s release_conn = False, chunked = False, body_pos = None, preload_content = False 280s decode_content = False, response_kw = {} 280s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 280s destination_scheme = None, conn = None, release_this_conn = True 280s http_tunnel_required = False, err = None, clean_exit = False 280s 280s def urlopen( # type: ignore[override] 280s self, 280s method: str, 280s url: str, 280s body: _TYPE_BODY | None = None, 280s headers: typing.Mapping[str, str] | None = None, 280s retries: Retry | bool | int | None = None, 280s redirect: bool = True, 280s assert_same_host: bool = True, 280s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 280s pool_timeout: int | None = None, 280s release_conn: bool | None = None, 280s chunked: bool = False, 280s body_pos: _TYPE_BODY_POSITION | None = None, 280s preload_content: bool = True, 280s decode_content: bool = True, 280s **response_kw: typing.Any, 280s ) -> BaseHTTPResponse: 280s """ 280s Get a connection from the pool and perform an HTTP request. 
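The root cause throughout this log is `[Errno 111] Connection refused`: nothing is listening on localhost:12341 because the notebook server never started. The same errno can be reproduced with only the standard library by connecting to a loopback port that was just released (the port is found dynamically; a rare race with port reuse is possible):

```python
import errno
import socket

# Reserve a free loopback port, then release it so nothing is listening.
probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
probe.bind(("127.0.0.1", 0))
port = probe.getsockname()[1]
probe.close()

refused = False
try:
    # socket.create_connection is what urllib3's _new_conn wraps above.
    socket.create_connection(("127.0.0.1", port), timeout=1)
except ConnectionRefusedError as e:
    # On Linux this is errno 111 (ECONNREFUSED), matching the log.
    refused = (e.errno == errno.ECONNREFUSED)

assert refused
```

urllib3 catches this `OSError` in `_new_conn` and re-raises it as `NewConnectionError`, which is what the retry machinery then counts against the (zero) budget.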
This is the 280s lowest level call for making a request, so you'll need to specify all 280s the raw details. 280s 280s .. note:: 280s 280s More commonly, it's appropriate to use a convenience method 280s such as :meth:`request`. 280s 280s .. note:: 280s 280s `release_conn` will only behave as expected if 280s `preload_content=False` because we want to make 280s `preload_content=False` the default behaviour someday soon without 280s breaking backwards compatibility. 280s 280s :param method: 280s HTTP request method (such as GET, POST, PUT, etc.) 280s 280s :param url: 280s The URL to perform the request on. 280s 280s :param body: 280s Data to send in the request body, either :class:`str`, :class:`bytes`, 280s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 280s 280s :param headers: 280s Dictionary of custom headers to send, such as User-Agent, 280s If-None-Match, etc. If None, pool headers are used. If provided, 280s these headers completely replace any pool-specific headers. 280s 280s :param retries: 280s Configure the number of retries to allow before raising a 280s :class:`~urllib3.exceptions.MaxRetryError` exception. 280s 280s Pass ``None`` to retry until you receive a response. Pass a 280s :class:`~urllib3.util.retry.Retry` object for fine-grained control 280s over different types of retries. 280s Pass an integer number to retry connection errors that many times, 280s but no other types of errors. Pass zero to never retry. 280s 280s If ``False``, then retries are disabled and any exception is raised 280s immediately. Also, instead of raising a MaxRetryError on redirects, 280s the redirect response will be returned. 280s 280s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 280s 280s :param redirect: 280s If True, automatically handle redirects (status codes 301, 302, 280s 303, 307, 308). Each redirect counts as a retry. Disabling retries 280s will disable redirect, too. 
280s 280s :param assert_same_host: 280s If ``True``, will make sure that the host of the pool requests is 280s consistent else will raise HostChangedError. When ``False``, you can 280s use the pool on an HTTP proxy and request foreign hosts. 280s 280s :param timeout: 280s If specified, overrides the default timeout for this one 280s request. It may be a float (in seconds) or an instance of 280s :class:`urllib3.util.Timeout`. 280s 280s :param pool_timeout: 280s If set and the pool is set to block=True, then this method will 280s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 280s connection is available within the time period. 280s 280s :param bool preload_content: 280s If True, the response's body will be preloaded into memory. 280s 280s :param bool decode_content: 280s If True, will attempt to decode the body based on the 280s 'content-encoding' header. 280s 280s :param release_conn: 280s If False, then the urlopen call will not release the connection 280s back into the pool once a response is received (but will release if 280s you read the entire contents of the response such as when 280s `preload_content=True`). This is useful if you're not preloading 280s the response's content immediately. You will need to call 280s ``r.release_conn()`` on the response ``r`` to return the connection 280s back into the pool. If None, it takes the value of ``preload_content`` 280s which defaults to ``True``. 280s 280s :param bool chunked: 280s If True, urllib3 will send the body using chunked transfer 280s encoding. Otherwise, urllib3 will send the body using the standard 280s content-length form. Defaults to False. 280s 280s :param int body_pos: 280s Position to seek to in file-like body in the event of a retry or 280s redirect. Typically this won't need to be set because urllib3 will 280s auto-populate the value when needed. 
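The `retries` docstring above distinguishes four shapes: `None`, an int, a `Retry` object, and `False`. A hedged sketch of that dispatch, mirroring the documented behavior of `Retry.from_int` (not its actual implementation; the tuple returned here is illustrative):

```python
def interpret_retries(retries, default=3):
    """Map urlopen's `retries` argument onto (retry_budget, raise_on_exhaust)."""
    if retries is None:
        return default, True       # fall back to the pool's default budget
    if retries is False:
        return 0, False            # disabled: raise immediately, return redirects
    return int(retries), True      # an int caps connection-error retries

assert interpret_retries(None) == (3, True)
assert interpret_retries(False) == (0, False)
assert interpret_retries(5) == (5, True)
```

In the failing run the adapter passed `self.max_retries`, already a `Retry(total=0, ...)` object, so no conversion happened and zero retries were allowed.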
280s """ 280s parsed_url = parse_url(url) 280s destination_scheme = parsed_url.scheme 280s 280s if headers is None: 280s headers = self.headers 280s 280s if not isinstance(retries, Retry): 280s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 280s 280s if release_conn is None: 280s release_conn = preload_content 280s 280s # Check host 280s if assert_same_host and not self.is_same_host(url): 280s raise HostChangedError(self, url, retries) 280s 280s # Ensure that the URL we're connecting to is properly encoded 280s if url.startswith("/"): 280s url = to_str(_encode_target(url)) 280s else: 280s url = to_str(parsed_url.url) 280s 280s conn = None 280s 280s # Track whether `conn` needs to be released before 280s # returning/raising/recursing. Update this variable if necessary, and 280s # leave `release_conn` constant throughout the function. That way, if 280s # the function recurses, the original value of `release_conn` will be 280s # passed down into the recursive call, and its value will be respected. 280s # 280s # See issue #651 [1] for details. 280s # 280s # [1] 280s release_this_conn = release_conn 280s 280s http_tunnel_required = connection_requires_http_tunnel( 280s self.proxy, self.proxy_config, destination_scheme 280s ) 280s 280s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 280s # have to copy the headers dict so we can safely change it without those 280s # changes being reflected in anyone else's copy. 280s if not http_tunnel_required: 280s headers = headers.copy() # type: ignore[attr-defined] 280s headers.update(self.proxy_headers) # type: ignore[union-attr] 280s 280s # Must keep the exception bound to a separate variable or else Python 3 280s # complains about UnboundLocalError. 280s err = None 280s 280s # Keep track of whether we cleanly exited the except block. This 280s # ensures we do proper cleanup in finally. 280s clean_exit = False 280s 280s # Rewind body position, if needed. 
Record current position 280s # for future rewinds in the event of a redirect/retry. 280s body_pos = set_file_position(body, body_pos) 280s 280s try: 280s # Request a connection from the queue. 280s timeout_obj = self._get_timeout(timeout) 280s conn = self._get_conn(timeout=pool_timeout) 280s 280s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 280s 280s # Is this a closed/new connection that requires CONNECT tunnelling? 280s if self.proxy is not None and http_tunnel_required and conn.is_closed: 280s try: 280s self._prepare_proxy(conn) 280s except (BaseSSLError, OSError, SocketTimeout) as e: 280s self._raise_timeout( 280s err=e, url=self.proxy.url, timeout_value=conn.timeout 280s ) 280s raise 280s 280s # If we're going to release the connection in ``finally:``, then 280s # the response doesn't need to know about the connection. Otherwise 280s # it will also try to release it and we'll have a double-release 280s # mess. 280s response_conn = conn if not release_conn else None 280s 280s # Make the request on the HTTPConnection object 280s > response = self._make_request( 280s conn, 280s method, 280s url, 280s timeout=timeout_obj, 280s body=body, 280s headers=headers, 280s chunked=chunked, 280s retries=retries, 280s response_conn=response_conn, 280s preload_content=preload_content, 280s decode_content=decode_content, 280s **response_kw, 280s ) 280s 280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 280s conn.request( 280s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 280s self.endheaders() 280s /usr/lib/python3.12/http/client.py:1331: in endheaders 280s self._send_output(message_body, encode_chunked=encode_chunked) 280s /usr/lib/python3.12/http/client.py:1091: in _send_output 280s self.send(msg) 280s /usr/lib/python3.12/http/client.py:1035: in 
send 280s self.connect() 280s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 280s self.sock = self._new_conn() 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s self = 280s 280s def _new_conn(self) -> socket.socket: 280s """Establish a socket connection and set nodelay settings on it. 280s 280s :return: New socket connection. 280s """ 280s try: 280s sock = connection.create_connection( 280s (self._dns_host, self.port), 280s self.timeout, 280s source_address=self.source_address, 280s socket_options=self.socket_options, 280s ) 280s except socket.gaierror as e: 280s raise NameResolutionError(self.host, self, e) from e 280s except SocketTimeout as e: 280s raise ConnectTimeoutError( 280s self, 280s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 280s ) from e 280s 280s except OSError as e: 280s > raise NewConnectionError( 280s self, f"Failed to establish a new connection: {e}" 280s ) from e 280s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 280s 280s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 280s 280s The above exception was the direct cause of the following exception: 280s 280s self = 280s request = , stream = False 280s timeout = Timeout(connect=None, read=None, total=None), verify = True 280s cert = None, proxies = OrderedDict() 280s 280s def send( 280s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 280s ): 280s """Sends PreparedRequest object. Returns Response object. 280s 280s :param request: The :class:`PreparedRequest ` being sent. 280s :param stream: (optional) Whether to stream the request content. 280s :param timeout: (optional) How long to wait for the server to send 280s data before giving up, as a float, or a :ref:`(connect timeout, 280s read timeout) ` tuple. 
280s :type timeout: float or tuple or urllib3 Timeout object 280s :param verify: (optional) Either a boolean, in which case it controls whether 280s we verify the server's TLS certificate, or a string, in which case it 280s must be a path to a CA bundle to use 280s :param cert: (optional) Any user-provided SSL certificate to be trusted. 280s :param proxies: (optional) The proxies dictionary to apply to the request. 280s :rtype: requests.Response 280s """ 280s 280s try: 280s conn = self.get_connection(request.url, proxies) 280s except LocationValueError as e: 280s raise InvalidURL(e, request=request) 280s 280s self.cert_verify(conn, request.url, verify, cert) 280s url = self.request_url(request, proxies) 280s self.add_headers( 280s request, 280s stream=stream, 280s timeout=timeout, 280s verify=verify, 280s cert=cert, 280s proxies=proxies, 280s ) 280s 280s chunked = not (request.body is None or "Content-Length" in request.headers) 280s 280s if isinstance(timeout, tuple): 280s try: 280s connect, read = timeout 280s timeout = TimeoutSauce(connect=connect, read=read) 280s except ValueError: 280s raise ValueError( 280s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 280s f"or a single float to set both timeouts to the same value." 
280s ) 280s elif isinstance(timeout, TimeoutSauce): 280s pass 280s else: 280s timeout = TimeoutSauce(connect=timeout, read=timeout) 280s 280s try: 280s > resp = conn.urlopen( 280s method=request.method, 280s url=url, 280s body=request.body, 280s headers=request.headers, 280s redirect=False, 280s assert_same_host=False, 280s preload_content=False, 280s decode_content=False, 280s retries=self.max_retries, 280s timeout=timeout, 280s chunked=chunked, 280s ) 280s 280s /usr/lib/python3/dist-packages/requests/adapters.py:486: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 280s retries = retries.increment( 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 280s method = 'GET', url = '/a%40b/api/contents', response = None 280s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 280s _pool = 280s _stacktrace = 280s 280s def increment( 280s self, 280s method: str | None = None, 280s url: str | None = None, 280s response: BaseHTTPResponse | None = None, 280s error: Exception | None = None, 280s _pool: ConnectionPool | None = None, 280s _stacktrace: TracebackType | None = None, 280s ) -> Retry: 280s """Return a new Retry object with incremented retry counters. 280s 280s :param response: A response object, or None, if the server did not 280s return a response. 280s :type response: :class:`~urllib3.response.BaseHTTPResponse` 280s :param Exception error: An error encountered during the request, or 280s None if the response was received successfully. 280s 280s :return: A new ``Retry`` object. 280s """ 280s if self.total is False and error: 280s # Disabled, indicate to re-raise the error. 
280s raise reraise(type(error), error, _stacktrace) 280s 280s total = self.total 280s if total is not None: 280s total -= 1 280s 280s connect = self.connect 280s read = self.read 280s redirect = self.redirect 280s status_count = self.status 280s other = self.other 280s cause = "unknown" 280s status = None 280s redirect_location = None 280s 280s if error and self._is_connection_error(error): 280s # Connect retry? 280s if connect is False: 280s raise reraise(type(error), error, _stacktrace) 280s elif connect is not None: 280s connect -= 1 280s 280s elif error and self._is_read_error(error): 280s # Read retry? 280s if read is False or method is None or not self._is_method_retryable(method): 280s raise reraise(type(error), error, _stacktrace) 280s elif read is not None: 280s read -= 1 280s 280s elif error: 280s # Other retry? 280s if other is not None: 280s other -= 1 280s 280s elif response and response.get_redirect_location(): 280s # Redirect retry? 280s if redirect is not None: 280s redirect -= 1 280s cause = "too many redirects" 280s response_redirect_location = response.get_redirect_location() 280s if response_redirect_location: 280s redirect_location = response_redirect_location 280s status = response.status 280s 280s else: 280s # Incrementing because of a server error like a 500 in 280s # status_forcelist and the given method is in the allowed_methods 280s cause = ResponseError.GENERIC_ERROR 280s if response and response.status: 280s if status_count is not None: 280s status_count -= 1 280s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 280s status = response.status 280s 280s history = self.history + ( 280s RequestHistory(method, url, error, status, redirect_location), 280s ) 280s 280s new_retry = self.new( 280s total=total, 280s connect=connect, 280s read=read, 280s redirect=redirect, 280s status=status_count, 280s other=other, 280s history=history, 280s ) 280s 280s if new_retry.is_exhausted(): 280s reason = error or 
ResponseError(cause) 280s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 280s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 280s 280s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 280s 280s During handling of the above exception, another exception occurred: 280s 280s cls = 280s 280s @classmethod 280s def wait_until_alive(cls): 280s """Wait for the server to be alive""" 280s url = cls.base_url() + 'api/contents' 280s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 280s try: 280s > cls.fetch_url(url) 280s 280s notebook/tests/launchnotebook.py:53: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s notebook/tests/launchnotebook.py:82: in fetch_url 280s return requests.get(url) 280s /usr/lib/python3/dist-packages/requests/api.py:73: in get 280s return request("get", url, params=params, **kwargs) 280s /usr/lib/python3/dist-packages/requests/api.py:59: in request 280s return session.request(method=method, url=url, **kwargs) 280s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 280s resp = self.send(prep, **send_kwargs) 280s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 280s r = adapter.send(request, **kwargs) 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s self = 280s request = , stream = False 280s timeout = Timeout(connect=None, read=None, total=None), verify = True 280s cert = None, proxies = OrderedDict() 280s 280s def send( 280s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 280s ): 280s """Sends PreparedRequest object. Returns Response object. 280s 280s :param request: The :class:`PreparedRequest ` being sent. 
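Throughout the log the request path appears as `/a%40b/api/contents`: the test's base URL prefix contains `a@b`, and `@` is percent-encoded to `%40` when the target is encoded (the `_encode_target` step in `urlopen` above). The encoding can be checked with the standard library alone:

```python
from urllib.parse import quote, unquote

# '@' (0x40) is not in quote()'s default safe set, so it becomes %40.
assert quote("/a@b/api/contents") == "/a%40b/api/contents"
assert unquote("/a%40b/api/contents") == "/a@b/api/contents"
```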
280s :param stream: (optional) Whether to stream the request content. 280s :param timeout: (optional) How long to wait for the server to send 280s data before giving up, as a float, or a :ref:`(connect timeout, 280s read timeout) ` tuple. 280s :type timeout: float or tuple or urllib3 Timeout object 280s :param verify: (optional) Either a boolean, in which case it controls whether 280s we verify the server's TLS certificate, or a string, in which case it 280s must be a path to a CA bundle to use 280s :param cert: (optional) Any user-provided SSL certificate to be trusted. 280s :param proxies: (optional) The proxies dictionary to apply to the request. 280s :rtype: requests.Response 280s """ 280s 280s try: 280s conn = self.get_connection(request.url, proxies) 280s except LocationValueError as e: 280s raise InvalidURL(e, request=request) 280s 280s self.cert_verify(conn, request.url, verify, cert) 280s url = self.request_url(request, proxies) 280s self.add_headers( 280s request, 280s stream=stream, 280s timeout=timeout, 280s verify=verify, 280s cert=cert, 280s proxies=proxies, 280s ) 280s 280s chunked = not (request.body is None or "Content-Length" in request.headers) 280s 280s if isinstance(timeout, tuple): 280s try: 280s connect, read = timeout 280s timeout = TimeoutSauce(connect=connect, read=read) 280s except ValueError: 280s raise ValueError( 280s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 280s f"or a single float to set both timeouts to the same value." 
280s ) 280s elif isinstance(timeout, TimeoutSauce): 280s pass 280s else: 280s timeout = TimeoutSauce(connect=timeout, read=timeout) 280s 280s try: 280s resp = conn.urlopen( 280s method=request.method, 280s url=url, 280s body=request.body, 280s headers=request.headers, 280s redirect=False, 280s assert_same_host=False, 280s preload_content=False, 280s decode_content=False, 280s retries=self.max_retries, 280s timeout=timeout, 280s chunked=chunked, 280s ) 280s 280s except (ProtocolError, OSError) as err: 280s raise ConnectionError(err, request=request) 280s 280s except MaxRetryError as e: 280s if isinstance(e.reason, ConnectTimeoutError): 280s # TODO: Remove this in 3.0.0: see #2811 280s if not isinstance(e.reason, NewConnectionError): 280s raise ConnectTimeout(e, request=request) 280s 280s if isinstance(e.reason, ResponseError): 280s raise RetryError(e, request=request) 280s 280s if isinstance(e.reason, _ProxyError): 280s raise ProxyError(e, request=request) 280s 280s if isinstance(e.reason, _SSLError): 280s # This branch is for urllib3 v1.22 and later. 
280s raise SSLError(e, request=request) 280s 280s > raise ConnectionError(e, request=request) 280s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 280s 280s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 280s 280s The above exception was the direct cause of the following exception: 280s 280s cls = 280s 280s @classmethod 280s def setup_class(cls): 280s cls.tmp_dir = TemporaryDirectory() 280s def tmp(*parts): 280s path = os.path.join(cls.tmp_dir.name, *parts) 280s try: 280s os.makedirs(path) 280s except OSError as e: 280s if e.errno != errno.EEXIST: 280s raise 280s return path 280s 280s cls.home_dir = tmp('home') 280s data_dir = cls.data_dir = tmp('data') 280s config_dir = cls.config_dir = tmp('config') 280s runtime_dir = cls.runtime_dir = tmp('runtime') 280s cls.notebook_dir = tmp('notebooks') 280s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 280s cls.env_patch.start() 280s # Patch systemwide & user-wide data & config directories, to isolate 280s # the tests from oddities of the local setup. But leave Python env 280s # locations alone, so data files for e.g. nbconvert are accessible. 280s # If this isolation isn't sufficient, you may need to run the tests in 280s # a virtualenv or conda env. 
280s cls.path_patch = patch.multiple( 280s jupyter_core.paths, 280s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 280s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 280s ) 280s cls.path_patch.start() 280s 280s config = cls.config or Config() 280s config.NotebookNotary.db_file = ':memory:' 280s 280s cls.token = hexlify(os.urandom(4)).decode('ascii') 280s 280s started = Event() 280s def start_thread(): 280s try: 280s bind_args = cls.get_bind_args() 280s app = cls.notebook = NotebookApp( 280s port_retries=0, 280s open_browser=False, 280s config_dir=cls.config_dir, 280s data_dir=cls.data_dir, 280s runtime_dir=cls.runtime_dir, 280s notebook_dir=cls.notebook_dir, 280s base_url=cls.url_prefix, 280s config=config, 280s allow_root=True, 280s token=cls.token, 280s **bind_args 280s ) 280s if "asyncio" in sys.modules: 280s app._init_asyncio_patch() 280s import asyncio 280s 280s asyncio.set_event_loop(asyncio.new_event_loop()) 280s # Patch the current loop in order to match production 280s # behavior 280s import nest_asyncio 280s 280s nest_asyncio.apply() 280s # don't register signal handler during tests 280s app.init_signal = lambda : None 280s # clear log handlers and propagate to root for nose to capture it 280s # needs to be redone after initialize, which reconfigures logging 280s app.log.propagate = True 280s app.log.handlers = [] 280s app.initialize(argv=cls.get_argv()) 280s app.log.propagate = True 280s app.log.handlers = [] 280s loop = IOLoop.current() 280s loop.add_callback(started.set) 280s app.start() 280s finally: 280s # set the event, so failure to start doesn't cause a hang 280s started.set() 280s app.session_manager.close() 280s cls.notebook_thread = Thread(target=start_thread) 280s cls.notebook_thread.daemon = True 280s cls.notebook_thread.start() 280s started.wait() 280s > cls.wait_until_alive() 280s 280s notebook/tests/launchnotebook.py:198: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s cls = 280s 280s 
@classmethod 280s def wait_until_alive(cls): 280s """Wait for the server to be alive""" 280s url = cls.base_url() + 'api/contents' 280s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 280s try: 280s cls.fetch_url(url) 280s except ModuleNotFoundError as error: 280s # Errors that should be immediately thrown back to caller 280s raise error 280s except Exception as e: 280s if not cls.notebook_thread.is_alive(): 280s > raise RuntimeError("The notebook server failed to start") from e 280s E RuntimeError: The notebook server failed to start 280s 280s notebook/tests/launchnotebook.py:59: RuntimeError 280s __________ ERROR at setup of BundleAPITest.test_bundler_import_error ___________ 280s 280s self = 280s 280s def _new_conn(self) -> socket.socket: 280s """Establish a socket connection and set nodelay settings on it. 280s 280s :return: New socket connection. 280s """ 280s try: 280s > sock = connection.create_connection( 280s (self._dns_host, self.port), 280s self.timeout, 280s source_address=self.source_address, 280s socket_options=self.socket_options, 280s ) 280s 280s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 280s raise err 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s address = ('localhost', 12341), timeout = None, source_address = None 280s socket_options = [(6, 1, 1)] 280s 280s def create_connection( 280s address: tuple[str, int], 280s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 280s source_address: tuple[str, int] | None = None, 280s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 280s ) -> socket.socket: 280s """Connect to *address* and return the socket object. 280s 280s Convenience function. Connect to *address* (a 2-tuple ``(host, 280s port)``) and return the socket object. 
Passing the optional 280s *timeout* parameter will set the timeout on the socket instance 280s before attempting to connect. If no *timeout* is supplied, the 280s global default timeout setting returned by :func:`socket.getdefaulttimeout` 280s is used. If *source_address* is set it must be a tuple of (host, port) 280s for the socket to bind as a source address before making the connection. 280s An host of '' or port 0 tells the OS to use the default. 280s """ 280s 280s host, port = address 280s if host.startswith("["): 280s host = host.strip("[]") 280s err = None 280s 280s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 280s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 280s # The original create_connection function always returns all records. 280s family = allowed_gai_family() 280s 280s try: 280s host.encode("idna") 280s except UnicodeError: 280s raise LocationParseError(f"'{host}', label empty or too long") from None 280s 280s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 280s af, socktype, proto, canonname, sa = res 280s sock = None 280s try: 280s sock = socket.socket(af, socktype, proto) 280s 280s # If provided, set socket level options before connecting. 
280s _set_socket_options(sock, socket_options) 280s 280s if timeout is not _DEFAULT_TIMEOUT: 280s sock.settimeout(timeout) 280s if source_address: 280s sock.bind(source_address) 280s > sock.connect(sa) 280s E ConnectionRefusedError: [Errno 111] Connection refused 280s 280s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 280s 280s The above exception was the direct cause of the following exception: 280s 280s self = 280s method = 'GET', url = '/a%40b/api/contents', body = None 280s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 280s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 280s redirect = False, assert_same_host = False 280s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 280s release_conn = False, chunked = False, body_pos = None, preload_content = False 280s decode_content = False, response_kw = {} 280s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 280s destination_scheme = None, conn = None, release_this_conn = True 280s http_tunnel_required = False, err = None, clean_exit = False 280s 280s def urlopen( # type: ignore[override] 280s self, 280s method: str, 280s url: str, 280s body: _TYPE_BODY | None = None, 280s headers: typing.Mapping[str, str] | None = None, 280s retries: Retry | bool | int | None = None, 280s redirect: bool = True, 280s assert_same_host: bool = True, 280s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 280s pool_timeout: int | None = None, 280s release_conn: bool | None = None, 280s chunked: bool = False, 280s body_pos: _TYPE_BODY_POSITION | None = None, 280s preload_content: bool = True, 280s decode_content: bool = True, 280s **response_kw: typing.Any, 280s ) -> BaseHTTPResponse: 280s """ 280s Get a connection from the pool and perform an HTTP request. 
This is the 280s lowest level call for making a request, so you'll need to specify all 280s the raw details. 280s 280s .. note:: 280s 280s More commonly, it's appropriate to use a convenience method 280s such as :meth:`request`. 280s 280s .. note:: 280s 280s `release_conn` will only behave as expected if 280s `preload_content=False` because we want to make 280s `preload_content=False` the default behaviour someday soon without 280s breaking backwards compatibility. 280s 280s :param method: 280s HTTP request method (such as GET, POST, PUT, etc.) 280s 280s :param url: 280s The URL to perform the request on. 280s 280s :param body: 280s Data to send in the request body, either :class:`str`, :class:`bytes`, 280s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 280s 280s :param headers: 280s Dictionary of custom headers to send, such as User-Agent, 280s If-None-Match, etc. If None, pool headers are used. If provided, 280s these headers completely replace any pool-specific headers. 280s 280s :param retries: 280s Configure the number of retries to allow before raising a 280s :class:`~urllib3.exceptions.MaxRetryError` exception. 280s 280s Pass ``None`` to retry until you receive a response. Pass a 280s :class:`~urllib3.util.retry.Retry` object for fine-grained control 280s over different types of retries. 280s Pass an integer number to retry connection errors that many times, 280s but no other types of errors. Pass zero to never retry. 280s 280s If ``False``, then retries are disabled and any exception is raised 280s immediately. Also, instead of raising a MaxRetryError on redirects, 280s the redirect response will be returned. 280s 280s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 280s 280s :param redirect: 280s If True, automatically handle redirects (status codes 301, 302, 280s 303, 307, 308). Each redirect counts as a retry. Disabling retries 280s will disable redirect, too. 
280s 280s :param assert_same_host: 280s If ``True``, will make sure that the host of the pool requests is 280s consistent else will raise HostChangedError. When ``False``, you can 280s use the pool on an HTTP proxy and request foreign hosts. 280s 280s :param timeout: 280s If specified, overrides the default timeout for this one 280s request. It may be a float (in seconds) or an instance of 280s :class:`urllib3.util.Timeout`. 280s 280s :param pool_timeout: 280s If set and the pool is set to block=True, then this method will 280s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 280s connection is available within the time period. 280s 280s :param bool preload_content: 280s If True, the response's body will be preloaded into memory. 280s 280s :param bool decode_content: 280s If True, will attempt to decode the body based on the 280s 'content-encoding' header. 280s 280s :param release_conn: 280s If False, then the urlopen call will not release the connection 280s back into the pool once a response is received (but will release if 280s you read the entire contents of the response such as when 280s `preload_content=True`). This is useful if you're not preloading 280s the response's content immediately. You will need to call 280s ``r.release_conn()`` on the response ``r`` to return the connection 280s back into the pool. If None, it takes the value of ``preload_content`` 280s which defaults to ``True``. 280s 280s :param bool chunked: 280s If True, urllib3 will send the body using chunked transfer 280s encoding. Otherwise, urllib3 will send the body using the standard 280s content-length form. Defaults to False. 280s 280s :param int body_pos: 280s Position to seek to in file-like body in the event of a retry or 280s redirect. Typically this won't need to be set because urllib3 will 280s auto-populate the value when needed. 
280s """ 280s parsed_url = parse_url(url) 280s destination_scheme = parsed_url.scheme 280s 280s if headers is None: 280s headers = self.headers 280s 280s if not isinstance(retries, Retry): 280s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 280s 280s if release_conn is None: 280s release_conn = preload_content 280s 280s # Check host 280s if assert_same_host and not self.is_same_host(url): 280s raise HostChangedError(self, url, retries) 280s 280s # Ensure that the URL we're connecting to is properly encoded 280s if url.startswith("/"): 280s url = to_str(_encode_target(url)) 280s else: 280s url = to_str(parsed_url.url) 280s 280s conn = None 280s 280s # Track whether `conn` needs to be released before 280s # returning/raising/recursing. Update this variable if necessary, and 280s # leave `release_conn` constant throughout the function. That way, if 280s # the function recurses, the original value of `release_conn` will be 280s # passed down into the recursive call, and its value will be respected. 280s # 280s # See issue #651 [1] for details. 280s # 280s # [1] 280s release_this_conn = release_conn 280s 280s http_tunnel_required = connection_requires_http_tunnel( 280s self.proxy, self.proxy_config, destination_scheme 280s ) 280s 280s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 280s # have to copy the headers dict so we can safely change it without those 280s # changes being reflected in anyone else's copy. 280s if not http_tunnel_required: 280s headers = headers.copy() # type: ignore[attr-defined] 280s headers.update(self.proxy_headers) # type: ignore[union-attr] 280s 280s # Must keep the exception bound to a separate variable or else Python 3 280s # complains about UnboundLocalError. 280s err = None 280s 280s # Keep track of whether we cleanly exited the except block. This 280s # ensures we do proper cleanup in finally. 280s clean_exit = False 280s 280s # Rewind body position, if needed. 
Record current position 280s # for future rewinds in the event of a redirect/retry. 280s body_pos = set_file_position(body, body_pos) 280s 280s try: 280s # Request a connection from the queue. 280s timeout_obj = self._get_timeout(timeout) 280s conn = self._get_conn(timeout=pool_timeout) 280s 280s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 280s 280s # Is this a closed/new connection that requires CONNECT tunnelling? 280s if self.proxy is not None and http_tunnel_required and conn.is_closed: 280s try: 280s self._prepare_proxy(conn) 280s except (BaseSSLError, OSError, SocketTimeout) as e: 280s self._raise_timeout( 280s err=e, url=self.proxy.url, timeout_value=conn.timeout 280s ) 280s raise 280s 280s # If we're going to release the connection in ``finally:``, then 280s # the response doesn't need to know about the connection. Otherwise 280s # it will also try to release it and we'll have a double-release 280s # mess. 280s response_conn = conn if not release_conn else None 280s 280s # Make the request on the HTTPConnection object 280s > response = self._make_request( 280s conn, 280s method, 280s url, 280s timeout=timeout_obj, 280s body=body, 280s headers=headers, 280s chunked=chunked, 280s retries=retries, 280s response_conn=response_conn, 280s preload_content=preload_content, 280s decode_content=decode_content, 280s **response_kw, 280s ) 280s 280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 280s conn.request( 280s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 280s self.endheaders() 280s /usr/lib/python3.12/http/client.py:1331: in endheaders 280s self._send_output(message_body, encode_chunked=encode_chunked) 280s /usr/lib/python3.12/http/client.py:1091: in _send_output 280s self.send(msg) 280s /usr/lib/python3.12/http/client.py:1035: in 
send 280s self.connect() 280s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 280s self.sock = self._new_conn() 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s self = 280s 280s def _new_conn(self) -> socket.socket: 280s """Establish a socket connection and set nodelay settings on it. 280s 280s :return: New socket connection. 280s """ 280s try: 280s sock = connection.create_connection( 280s (self._dns_host, self.port), 280s self.timeout, 280s source_address=self.source_address, 280s socket_options=self.socket_options, 280s ) 280s except socket.gaierror as e: 280s raise NameResolutionError(self.host, self, e) from e 280s except SocketTimeout as e: 280s raise ConnectTimeoutError( 280s self, 280s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 280s ) from e 280s 280s except OSError as e: 280s > raise NewConnectionError( 280s self, f"Failed to establish a new connection: {e}" 280s ) from e 280s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 280s 280s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 280s 280s The above exception was the direct cause of the following exception: 280s 280s self = 280s request = , stream = False 280s timeout = Timeout(connect=None, read=None, total=None), verify = True 280s cert = None, proxies = OrderedDict() 280s 280s def send( 280s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 280s ): 280s """Sends PreparedRequest object. Returns Response object. 280s 280s :param request: The :class:`PreparedRequest ` being sent. 280s :param stream: (optional) Whether to stream the request content. 280s :param timeout: (optional) How long to wait for the server to send 280s data before giving up, as a float, or a :ref:`(connect timeout, 280s read timeout) ` tuple. 
280s :type timeout: float or tuple or urllib3 Timeout object 280s :param verify: (optional) Either a boolean, in which case it controls whether 280s we verify the server's TLS certificate, or a string, in which case it 280s must be a path to a CA bundle to use 280s :param cert: (optional) Any user-provided SSL certificate to be trusted. 280s :param proxies: (optional) The proxies dictionary to apply to the request. 280s :rtype: requests.Response 280s """ 280s 280s try: 280s conn = self.get_connection(request.url, proxies) 280s except LocationValueError as e: 280s raise InvalidURL(e, request=request) 280s 280s self.cert_verify(conn, request.url, verify, cert) 280s url = self.request_url(request, proxies) 280s self.add_headers( 280s request, 280s stream=stream, 280s timeout=timeout, 280s verify=verify, 280s cert=cert, 280s proxies=proxies, 280s ) 280s 280s chunked = not (request.body is None or "Content-Length" in request.headers) 280s 280s if isinstance(timeout, tuple): 280s try: 280s connect, read = timeout 280s timeout = TimeoutSauce(connect=connect, read=read) 280s except ValueError: 280s raise ValueError( 280s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 280s f"or a single float to set both timeouts to the same value." 
280s ) 280s elif isinstance(timeout, TimeoutSauce): 280s pass 280s else: 280s timeout = TimeoutSauce(connect=timeout, read=timeout) 280s 280s try: 280s > resp = conn.urlopen( 280s method=request.method, 280s url=url, 280s body=request.body, 280s headers=request.headers, 280s redirect=False, 280s assert_same_host=False, 280s preload_content=False, 280s decode_content=False, 280s retries=self.max_retries, 280s timeout=timeout, 280s chunked=chunked, 280s ) 280s 280s /usr/lib/python3/dist-packages/requests/adapters.py:486: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 280s retries = retries.increment( 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 280s method = 'GET', url = '/a%40b/api/contents', response = None 280s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 280s _pool = 280s _stacktrace = 280s 280s def increment( 280s self, 280s method: str | None = None, 280s url: str | None = None, 280s response: BaseHTTPResponse | None = None, 280s error: Exception | None = None, 280s _pool: ConnectionPool | None = None, 280s _stacktrace: TracebackType | None = None, 280s ) -> Retry: 280s """Return a new Retry object with incremented retry counters. 280s 280s :param response: A response object, or None, if the server did not 280s return a response. 280s :type response: :class:`~urllib3.response.BaseHTTPResponse` 280s :param Exception error: An error encountered during the request, or 280s None if the response was received successfully. 280s 280s :return: A new ``Retry`` object. 280s """ 280s if self.total is False and error: 280s # Disabled, indicate to re-raise the error. 
280s raise reraise(type(error), error, _stacktrace) 280s 280s total = self.total 280s if total is not None: 280s total -= 1 280s 280s connect = self.connect 280s read = self.read 280s redirect = self.redirect 280s status_count = self.status 280s other = self.other 280s cause = "unknown" 280s status = None 280s redirect_location = None 280s 280s if error and self._is_connection_error(error): 280s # Connect retry? 280s if connect is False: 280s raise reraise(type(error), error, _stacktrace) 280s elif connect is not None: 280s connect -= 1 280s 280s elif error and self._is_read_error(error): 280s # Read retry? 280s if read is False or method is None or not self._is_method_retryable(method): 280s raise reraise(type(error), error, _stacktrace) 280s elif read is not None: 280s read -= 1 280s 280s elif error: 280s # Other retry? 280s if other is not None: 280s other -= 1 280s 280s elif response and response.get_redirect_location(): 280s # Redirect retry? 280s if redirect is not None: 280s redirect -= 1 280s cause = "too many redirects" 280s response_redirect_location = response.get_redirect_location() 280s if response_redirect_location: 280s redirect_location = response_redirect_location 280s status = response.status 280s 280s else: 280s # Incrementing because of a server error like a 500 in 280s # status_forcelist and the given method is in the allowed_methods 280s cause = ResponseError.GENERIC_ERROR 280s if response and response.status: 280s if status_count is not None: 280s status_count -= 1 280s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 280s status = response.status 280s 280s history = self.history + ( 280s RequestHistory(method, url, error, status, redirect_location), 280s ) 280s 280s new_retry = self.new( 280s total=total, 280s connect=connect, 280s read=read, 280s redirect=redirect, 280s status=status_count, 280s other=other, 280s history=history, 280s ) 280s 280s if new_retry.is_exhausted(): 280s reason = error or 
ResponseError(cause) 280s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 280s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 280s 280s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 280s 280s During handling of the above exception, another exception occurred: 280s 280s cls = 280s 280s @classmethod 280s def wait_until_alive(cls): 280s """Wait for the server to be alive""" 280s url = cls.base_url() + 'api/contents' 280s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 280s try: 280s > cls.fetch_url(url) 280s 280s notebook/tests/launchnotebook.py:53: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s notebook/tests/launchnotebook.py:82: in fetch_url 280s return requests.get(url) 280s /usr/lib/python3/dist-packages/requests/api.py:73: in get 280s return request("get", url, params=params, **kwargs) 280s /usr/lib/python3/dist-packages/requests/api.py:59: in request 280s return session.request(method=method, url=url, **kwargs) 280s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 280s resp = self.send(prep, **send_kwargs) 280s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 280s r = adapter.send(request, **kwargs) 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s self = 280s request = , stream = False 280s timeout = Timeout(connect=None, read=None, total=None), verify = True 280s cert = None, proxies = OrderedDict() 280s 280s def send( 280s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 280s ): 280s """Sends PreparedRequest object. Returns Response object. 280s 280s :param request: The :class:`PreparedRequest ` being sent. 
280s :param stream: (optional) Whether to stream the request content. 280s :param timeout: (optional) How long to wait for the server to send 280s data before giving up, as a float, or a :ref:`(connect timeout, 280s read timeout) ` tuple. 280s :type timeout: float or tuple or urllib3 Timeout object 280s :param verify: (optional) Either a boolean, in which case it controls whether 280s we verify the server's TLS certificate, or a string, in which case it 280s must be a path to a CA bundle to use 280s :param cert: (optional) Any user-provided SSL certificate to be trusted. 280s :param proxies: (optional) The proxies dictionary to apply to the request. 280s :rtype: requests.Response 280s """ 280s 280s try: 280s conn = self.get_connection(request.url, proxies) 280s except LocationValueError as e: 280s raise InvalidURL(e, request=request) 280s 280s self.cert_verify(conn, request.url, verify, cert) 280s url = self.request_url(request, proxies) 280s self.add_headers( 280s request, 280s stream=stream, 280s timeout=timeout, 280s verify=verify, 280s cert=cert, 280s proxies=proxies, 280s ) 280s 280s chunked = not (request.body is None or "Content-Length" in request.headers) 280s 280s if isinstance(timeout, tuple): 280s try: 280s connect, read = timeout 280s timeout = TimeoutSauce(connect=connect, read=read) 280s except ValueError: 280s raise ValueError( 280s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 280s f"or a single float to set both timeouts to the same value." 
280s ) 280s elif isinstance(timeout, TimeoutSauce): 280s pass 280s else: 280s timeout = TimeoutSauce(connect=timeout, read=timeout) 280s 280s try: 280s resp = conn.urlopen( 280s method=request.method, 280s url=url, 280s body=request.body, 280s headers=request.headers, 280s redirect=False, 280s assert_same_host=False, 280s preload_content=False, 280s decode_content=False, 280s retries=self.max_retries, 280s timeout=timeout, 280s chunked=chunked, 280s ) 280s 280s except (ProtocolError, OSError) as err: 280s raise ConnectionError(err, request=request) 280s 280s except MaxRetryError as e: 280s if isinstance(e.reason, ConnectTimeoutError): 280s # TODO: Remove this in 3.0.0: see #2811 280s if not isinstance(e.reason, NewConnectionError): 280s raise ConnectTimeout(e, request=request) 280s 280s if isinstance(e.reason, ResponseError): 280s raise RetryError(e, request=request) 280s 280s if isinstance(e.reason, _ProxyError): 280s raise ProxyError(e, request=request) 280s 280s if isinstance(e.reason, _SSLError): 280s # This branch is for urllib3 v1.22 and later. 280s raise SSLError(e, request=request) 280s 280s > raise ConnectionError(e, request=request) 280s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 280s 280s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 280s 280s The above exception was the direct cause of the following exception: 280s 280s cls = 280s 280s @classmethod 280s def setup_class(cls): 280s """Make a test notebook. Borrowed from nbconvert test. 
Assumes the class
280s     teardown will clean it up in the end."""
280s >       super().setup_class()
280s 
280s notebook/bundler/tests/test_bundler_api.py:27: 
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
280s notebook/tests/launchnotebook.py:198: in setup_class
280s     cls.wait_until_alive()
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
280s 
280s cls = 
280s 
280s     @classmethod
280s     def wait_until_alive(cls):
280s         """Wait for the server to be alive"""
280s         url = cls.base_url() + 'api/contents'
280s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
280s             try:
280s                 cls.fetch_url(url)
280s             except ModuleNotFoundError as error:
280s                 # Errors that should be immediately thrown back to caller
280s                 raise error
280s             except Exception as e:
280s                 if not cls.notebook_thread.is_alive():
280s >                   raise RuntimeError("The notebook server failed to start") from e
280s E                   RuntimeError: The notebook server failed to start
280s 
280s notebook/tests/launchnotebook.py:59: RuntimeError
280s _____________ ERROR at setup of BundleAPITest.test_bundler_invoke ______________
280s 
280s self = 
280s 
280s     def _new_conn(self) -> socket.socket:
280s         """Establish a socket connection and set nodelay settings on it.
280s 
280s         :return: New socket connection.
280s         """
280s         try:
280s >           sock = connection.create_connection(
280s                 (self._dns_host, self.port),
280s                 self.timeout,
280s                 source_address=self.source_address,
280s                 socket_options=self.socket_options,
280s             )
280s 
280s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
280s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
280s     raise err
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
280s 
280s address = ('localhost', 12341), timeout = None, source_address = None
280s socket_options = [(6, 1, 1)]
280s 
280s     def create_connection(
280s         address: tuple[str, int],
280s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
280s         source_address: tuple[str, int] | None = None,
280s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
280s     ) -> socket.socket:
280s         """Connect to *address* and return the socket object.
280s 
280s         Convenience function. Connect to *address* (a 2-tuple ``(host,
280s         port)``) and return the socket object. Passing the optional
280s         *timeout* parameter will set the timeout on the socket instance
280s         before attempting to connect. If no *timeout* is supplied, the
280s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
280s         is used. If *source_address* is set it must be a tuple of (host, port)
280s         for the socket to bind as a source address before making the connection.
280s         An host of '' or port 0 tells the OS to use the default.
280s         """
280s 
280s         host, port = address
280s         if host.startswith("["):
280s             host = host.strip("[]")
280s         err = None
280s 
280s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
280s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
280s         # The original create_connection function always returns all records.
280s         family = allowed_gai_family()
280s 
280s         try:
280s             host.encode("idna")
280s         except UnicodeError:
280s             raise LocationParseError(f"'{host}', label empty or too long") from None
280s 
280s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
280s             af, socktype, proto, canonname, sa = res
280s             sock = None
280s             try:
280s                 sock = socket.socket(af, socktype, proto)
280s 
280s                 # If provided, set socket level options before connecting.
280s                 _set_socket_options(sock, socket_options)
280s 
280s                 if timeout is not _DEFAULT_TIMEOUT:
280s                     sock.settimeout(timeout)
280s                 if source_address:
280s                     sock.bind(source_address)
280s >               sock.connect(sa)
280s E               ConnectionRefusedError: [Errno 111] Connection refused
280s 
280s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
280s 
280s The above exception was the direct cause of the following exception:
280s 
280s self = 
280s method = 'GET', url = '/a%40b/api/contents', body = None
280s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
280s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
280s redirect = False, assert_same_host = False
280s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
280s release_conn = False, chunked = False, body_pos = None, preload_content = False
280s decode_content = False, response_kw = {}
280s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
280s destination_scheme = None, conn = None, release_this_conn = True
280s http_tunnel_required = False, err = None, clean_exit = False
280s 
280s     def urlopen(  # type: ignore[override]
280s         self,
280s         method: str,
280s         url: str,
280s         body: _TYPE_BODY | None = None,
280s         headers: typing.Mapping[str, str] | None = None,
280s         retries: Retry | bool | int | None = None,
280s         redirect: bool = True,
280s         assert_same_host: bool = True,
280s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
280s         pool_timeout: int | None = None,
280s         release_conn: bool | None = None,
280s         chunked: bool = False,
280s         body_pos: _TYPE_BODY_POSITION | None = None,
280s         preload_content: bool = True,
280s         decode_content: bool = True,
280s         **response_kw: typing.Any,
280s     ) -> BaseHTTPResponse:
280s         """
280s         Get a connection from the pool and perform an HTTP request. This is the
280s         lowest level call for making a request, so you'll need to specify all
280s         the raw details.
280s 
280s         .. note::
280s 
280s            More commonly, it's appropriate to use a convenience method
280s            such as :meth:`request`.
280s 
280s         .. note::
280s 
280s            `release_conn` will only behave as expected if
280s            `preload_content=False` because we want to make
280s            `preload_content=False` the default behaviour someday soon without
280s            breaking backwards compatibility.
280s 
280s         :param method:
280s             HTTP request method (such as GET, POST, PUT, etc.)
280s 
280s         :param url:
280s             The URL to perform the request on.
280s 
280s         :param body:
280s             Data to send in the request body, either :class:`str`, :class:`bytes`,
280s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
280s 
280s         :param headers:
280s             Dictionary of custom headers to send, such as User-Agent,
280s             If-None-Match, etc. If None, pool headers are used. If provided,
280s             these headers completely replace any pool-specific headers.
280s 
280s         :param retries:
280s             Configure the number of retries to allow before raising a
280s             :class:`~urllib3.exceptions.MaxRetryError` exception.
280s 
280s             Pass ``None`` to retry until you receive a response. Pass a
280s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
280s             over different types of retries.
280s             Pass an integer number to retry connection errors that many times,
280s             but no other types of errors. Pass zero to never retry.
280s 
280s             If ``False``, then retries are disabled and any exception is raised
280s             immediately. Also, instead of raising a MaxRetryError on redirects,
280s             the redirect response will be returned.
280s 
280s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
280s 
280s         :param redirect:
280s             If True, automatically handle redirects (status codes 301, 302,
280s             303, 307, 308). Each redirect counts as a retry. Disabling retries
280s             will disable redirect, too.
280s 
280s         :param assert_same_host:
280s             If ``True``, will make sure that the host of the pool requests is
280s             consistent else will raise HostChangedError. When ``False``, you can
280s             use the pool on an HTTP proxy and request foreign hosts.
280s 
280s         :param timeout:
280s             If specified, overrides the default timeout for this one
280s             request. It may be a float (in seconds) or an instance of
280s             :class:`urllib3.util.Timeout`.
280s 
280s         :param pool_timeout:
280s             If set and the pool is set to block=True, then this method will
280s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
280s             connection is available within the time period.
280s 
280s         :param bool preload_content:
280s             If True, the response's body will be preloaded into memory.
280s 
280s         :param bool decode_content:
280s             If True, will attempt to decode the body based on the
280s             'content-encoding' header.
280s 
280s         :param release_conn:
280s             If False, then the urlopen call will not release the connection
280s             back into the pool once a response is received (but will release if
280s             you read the entire contents of the response such as when
280s             `preload_content=True`). This is useful if you're not preloading
280s             the response's content immediately. You will need to call
280s             ``r.release_conn()`` on the response ``r`` to return the connection
280s             back into the pool. If None, it takes the value of ``preload_content``
280s             which defaults to ``True``.
280s 
280s         :param bool chunked:
280s             If True, urllib3 will send the body using chunked transfer
280s             encoding. Otherwise, urllib3 will send the body using the standard
280s             content-length form. Defaults to False.
280s 
280s         :param int body_pos:
280s             Position to seek to in file-like body in the event of a retry or
280s             redirect. Typically this won't need to be set because urllib3 will
280s             auto-populate the value when needed.
280s         """
280s         parsed_url = parse_url(url)
280s         destination_scheme = parsed_url.scheme
280s 
280s         if headers is None:
280s             headers = self.headers
280s 
280s         if not isinstance(retries, Retry):
280s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
280s 
280s         if release_conn is None:
280s             release_conn = preload_content
280s 
280s         # Check host
280s         if assert_same_host and not self.is_same_host(url):
280s             raise HostChangedError(self, url, retries)
280s 
280s         # Ensure that the URL we're connecting to is properly encoded
280s         if url.startswith("/"):
280s             url = to_str(_encode_target(url))
280s         else:
280s             url = to_str(parsed_url.url)
280s 
280s         conn = None
280s 
280s         # Track whether `conn` needs to be released before
280s         # returning/raising/recursing. Update this variable if necessary, and
280s         # leave `release_conn` constant throughout the function. That way, if
280s         # the function recurses, the original value of `release_conn` will be
280s         # passed down into the recursive call, and its value will be respected.
280s         #
280s         # See issue #651 [1] for details.
280s         #
280s         # [1] 
280s         release_this_conn = release_conn
280s 
280s         http_tunnel_required = connection_requires_http_tunnel(
280s             self.proxy, self.proxy_config, destination_scheme
280s         )
280s 
280s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
280s         # have to copy the headers dict so we can safely change it without those
280s         # changes being reflected in anyone else's copy.
280s         if not http_tunnel_required:
280s             headers = headers.copy()  # type: ignore[attr-defined]
280s             headers.update(self.proxy_headers)  # type: ignore[union-attr]
280s 
280s         # Must keep the exception bound to a separate variable or else Python 3
280s         # complains about UnboundLocalError.
280s         err = None
280s 
280s         # Keep track of whether we cleanly exited the except block. This
280s         # ensures we do proper cleanup in finally.
280s         clean_exit = False
280s 
280s         # Rewind body position, if needed. Record current position
280s         # for future rewinds in the event of a redirect/retry.
280s         body_pos = set_file_position(body, body_pos)
280s 
280s         try:
280s             # Request a connection from the queue.
280s             timeout_obj = self._get_timeout(timeout)
280s             conn = self._get_conn(timeout=pool_timeout)
280s 
280s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
280s 
280s             # Is this a closed/new connection that requires CONNECT tunnelling?
280s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
280s                 try:
280s                     self._prepare_proxy(conn)
280s                 except (BaseSSLError, OSError, SocketTimeout) as e:
280s                     self._raise_timeout(
280s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
280s                     )
280s                     raise
280s 
280s             # If we're going to release the connection in ``finally:``, then
280s             # the response doesn't need to know about the connection. Otherwise
280s             # it will also try to release it and we'll have a double-release
280s             # mess.
280s             response_conn = conn if not release_conn else None
280s 
280s             # Make the request on the HTTPConnection object
280s >           response = self._make_request(
280s                 conn,
280s                 method,
280s                 url,
280s                 timeout=timeout_obj,
280s                 body=body,
280s                 headers=headers,
280s                 chunked=chunked,
280s                 retries=retries,
280s                 response_conn=response_conn,
280s                 preload_content=preload_content,
280s                 decode_content=decode_content,
280s                 **response_kw,
280s             )
280s 
280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
280s     conn.request(
280s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
280s     self.endheaders()
280s /usr/lib/python3.12/http/client.py:1331: in endheaders
280s     self._send_output(message_body, encode_chunked=encode_chunked)
280s /usr/lib/python3.12/http/client.py:1091: in _send_output
280s     self.send(msg)
280s /usr/lib/python3.12/http/client.py:1035: in send
280s     self.connect()
280s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
280s     self.sock = self._new_conn()
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
280s 
280s self = 
280s 
280s     def _new_conn(self) -> socket.socket:
280s         """Establish a socket connection and set nodelay settings on it.
280s 
280s         :return: New socket connection.
280s         """
280s         try:
280s             sock = connection.create_connection(
280s                 (self._dns_host, self.port),
280s                 self.timeout,
280s                 source_address=self.source_address,
280s                 socket_options=self.socket_options,
280s             )
280s         except socket.gaierror as e:
280s             raise NameResolutionError(self.host, self, e) from e
280s         except SocketTimeout as e:
280s             raise ConnectTimeoutError(
280s                 self,
280s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
280s             ) from e
280s 
280s         except OSError as e:
280s >           raise NewConnectionError(
280s                 self, f"Failed to establish a new connection: {e}"
280s             ) from e
280s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
280s 
280s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
280s 
280s The above exception was the direct cause of the following exception:
280s 
280s self = 
280s request = , stream = False
280s timeout = Timeout(connect=None, read=None, total=None), verify = True
280s cert = None, proxies = OrderedDict()
280s 
280s     def send(
280s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
280s     ):
280s         """Sends PreparedRequest object. Returns Response object.
280s 
280s         :param request: The :class:`PreparedRequest ` being sent.
280s         :param stream: (optional) Whether to stream the request content.
280s         :param timeout: (optional) How long to wait for the server to send
280s             data before giving up, as a float, or a :ref:`(connect timeout,
280s             read timeout) ` tuple.
280s         :type timeout: float or tuple or urllib3 Timeout object
280s         :param verify: (optional) Either a boolean, in which case it controls whether
280s             we verify the server's TLS certificate, or a string, in which case it
280s             must be a path to a CA bundle to use
280s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
280s         :param proxies: (optional) The proxies dictionary to apply to the request.
280s         :rtype: requests.Response
280s         """
280s 
280s         try:
280s             conn = self.get_connection(request.url, proxies)
280s         except LocationValueError as e:
280s             raise InvalidURL(e, request=request)
280s 
280s         self.cert_verify(conn, request.url, verify, cert)
280s         url = self.request_url(request, proxies)
280s         self.add_headers(
280s             request,
280s             stream=stream,
280s             timeout=timeout,
280s             verify=verify,
280s             cert=cert,
280s             proxies=proxies,
280s         )
280s 
280s         chunked = not (request.body is None or "Content-Length" in request.headers)
280s 
280s         if isinstance(timeout, tuple):
280s             try:
280s                 connect, read = timeout
280s                 timeout = TimeoutSauce(connect=connect, read=read)
280s             except ValueError:
280s                 raise ValueError(
280s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
280s                     f"or a single float to set both timeouts to the same value."
280s                 )
280s         elif isinstance(timeout, TimeoutSauce):
280s             pass
280s         else:
280s             timeout = TimeoutSauce(connect=timeout, read=timeout)
280s 
280s         try:
280s >           resp = conn.urlopen(
280s                 method=request.method,
280s                 url=url,
280s                 body=request.body,
280s                 headers=request.headers,
280s                 redirect=False,
280s                 assert_same_host=False,
280s                 preload_content=False,
280s                 decode_content=False,
280s                 retries=self.max_retries,
280s                 timeout=timeout,
280s                 chunked=chunked,
280s             )
280s 
280s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
280s     retries = retries.increment(
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
280s 
280s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
280s method = 'GET', url = '/a%40b/api/contents', response = None
280s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
280s _pool = 
280s _stacktrace = 
280s 
280s     def increment(
280s         self,
280s         method: str | None = None,
280s         url: str | None = None,
280s         response: BaseHTTPResponse | None = None,
280s         error: Exception | None = None,
280s         _pool: ConnectionPool | None = None,
280s         _stacktrace: TracebackType | None = None,
280s     ) -> Retry:
280s         """Return a new Retry object with incremented retry counters.
280s 
280s         :param response: A response object, or None, if the server did not
280s             return a response.
280s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
280s         :param Exception error: An error encountered during the request, or
280s             None if the response was received successfully.
280s 
280s         :return: A new ``Retry`` object.
280s         """
280s         if self.total is False and error:
280s             # Disabled, indicate to re-raise the error.
280s             raise reraise(type(error), error, _stacktrace)
280s 
280s         total = self.total
280s         if total is not None:
280s             total -= 1
280s 
280s         connect = self.connect
280s         read = self.read
280s         redirect = self.redirect
280s         status_count = self.status
280s         other = self.other
280s         cause = "unknown"
280s         status = None
280s         redirect_location = None
280s 
280s         if error and self._is_connection_error(error):
280s             # Connect retry?
280s             if connect is False:
280s                 raise reraise(type(error), error, _stacktrace)
280s             elif connect is not None:
280s                 connect -= 1
280s 
280s         elif error and self._is_read_error(error):
280s             # Read retry?
280s             if read is False or method is None or not self._is_method_retryable(method):
280s                 raise reraise(type(error), error, _stacktrace)
280s             elif read is not None:
280s                 read -= 1
280s 
280s         elif error:
280s             # Other retry?
280s             if other is not None:
280s                 other -= 1
280s 
280s         elif response and response.get_redirect_location():
280s             # Redirect retry?
280s             if redirect is not None:
280s                 redirect -= 1
280s             cause = "too many redirects"
280s             response_redirect_location = response.get_redirect_location()
280s             if response_redirect_location:
280s                 redirect_location = response_redirect_location
280s             status = response.status
280s 
280s         else:
280s             # Incrementing because of a server error like a 500 in
280s             # status_forcelist and the given method is in the allowed_methods
280s             cause = ResponseError.GENERIC_ERROR
280s             if response and response.status:
280s                 if status_count is not None:
280s                     status_count -= 1
280s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
280s                 status = response.status
280s 
280s         history = self.history + (
280s             RequestHistory(method, url, error, status, redirect_location),
280s         )
280s 
280s         new_retry = self.new(
280s             total=total,
280s             connect=connect,
280s             read=read,
280s             redirect=redirect,
280s             status=status_count,
280s             other=other,
280s             history=history,
280s         )
280s 
280s         if new_retry.is_exhausted():
280s             reason = error or ResponseError(cause)
280s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
280s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
280s 
280s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
280s 
280s During handling of the above exception, another exception occurred:
280s 
280s cls = 
280s 
280s     @classmethod
280s     def wait_until_alive(cls):
280s         """Wait for the server to be alive"""
280s         url = cls.base_url() + 'api/contents'
280s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
280s             try:
280s >               cls.fetch_url(url)
280s 
280s notebook/tests/launchnotebook.py:53: 
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
280s notebook/tests/launchnotebook.py:82: in fetch_url
280s     return requests.get(url)
280s /usr/lib/python3/dist-packages/requests/api.py:73: in get
280s     return request("get", url, params=params, **kwargs)
280s /usr/lib/python3/dist-packages/requests/api.py:59: in request
280s     return session.request(method=method, url=url, **kwargs)
280s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
280s     resp = self.send(prep, **send_kwargs)
280s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
280s     r = adapter.send(request, **kwargs)
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
280s 
280s self = 
280s request = , stream = False
280s timeout = Timeout(connect=None, read=None, total=None), verify = True
280s cert = None, proxies = OrderedDict()
280s 
280s     def send(
280s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
280s     ):
280s         """Sends PreparedRequest object. Returns Response object.
280s 
280s         :param request: The :class:`PreparedRequest ` being sent.
280s         :param stream: (optional) Whether to stream the request content.
280s         :param timeout: (optional) How long to wait for the server to send
280s             data before giving up, as a float, or a :ref:`(connect timeout,
280s             read timeout) ` tuple.
280s         :type timeout: float or tuple or urllib3 Timeout object
280s         :param verify: (optional) Either a boolean, in which case it controls whether
280s             we verify the server's TLS certificate, or a string, in which case it
280s             must be a path to a CA bundle to use
280s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
280s         :param proxies: (optional) The proxies dictionary to apply to the request.
280s         :rtype: requests.Response
280s         """
280s 
280s         try:
280s             conn = self.get_connection(request.url, proxies)
280s         except LocationValueError as e:
280s             raise InvalidURL(e, request=request)
280s 
280s         self.cert_verify(conn, request.url, verify, cert)
280s         url = self.request_url(request, proxies)
280s         self.add_headers(
280s             request,
280s             stream=stream,
280s             timeout=timeout,
280s             verify=verify,
280s             cert=cert,
280s             proxies=proxies,
280s         )
280s 
280s         chunked = not (request.body is None or "Content-Length" in request.headers)
280s 
280s         if isinstance(timeout, tuple):
280s             try:
280s                 connect, read = timeout
280s                 timeout = TimeoutSauce(connect=connect, read=read)
280s             except ValueError:
280s                 raise ValueError(
280s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
280s                     f"or a single float to set both timeouts to the same value."
280s                 )
280s         elif isinstance(timeout, TimeoutSauce):
280s             pass
280s         else:
280s             timeout = TimeoutSauce(connect=timeout, read=timeout)
280s 
280s         try:
280s             resp = conn.urlopen(
280s                 method=request.method,
280s                 url=url,
280s                 body=request.body,
280s                 headers=request.headers,
280s                 redirect=False,
280s                 assert_same_host=False,
280s                 preload_content=False,
280s                 decode_content=False,
280s                 retries=self.max_retries,
280s                 timeout=timeout,
280s                 chunked=chunked,
280s             )
280s 
280s         except (ProtocolError, OSError) as err:
280s             raise ConnectionError(err, request=request)
280s 
280s         except MaxRetryError as e:
280s             if isinstance(e.reason, ConnectTimeoutError):
280s                 # TODO: Remove this in 3.0.0: see #2811
280s                 if not isinstance(e.reason, NewConnectionError):
280s                     raise ConnectTimeout(e, request=request)
280s 
280s             if isinstance(e.reason, ResponseError):
280s                 raise RetryError(e, request=request)
280s 
280s             if isinstance(e.reason, _ProxyError):
280s                 raise ProxyError(e, request=request)
280s 
280s             if isinstance(e.reason, _SSLError):
280s                 # This branch is for urllib3 v1.22 and later.
280s raise SSLError(e, request=request) 280s 280s > raise ConnectionError(e, request=request) 280s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 280s 280s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 280s 280s The above exception was the direct cause of the following exception: 280s 280s cls = 280s 280s @classmethod 280s def setup_class(cls): 280s """Make a test notebook. Borrowed from nbconvert test. Assumes the class 280s teardown will clean it up in the end.""" 280s > super().setup_class() 280s 280s notebook/bundler/tests/test_bundler_api.py:27: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s notebook/tests/launchnotebook.py:198: in setup_class 280s cls.wait_until_alive() 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s cls = 280s 280s @classmethod 280s def wait_until_alive(cls): 280s """Wait for the server to be alive""" 280s url = cls.base_url() + 'api/contents' 280s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 280s try: 280s cls.fetch_url(url) 280s except ModuleNotFoundError as error: 280s # Errors that should be immediately thrown back to caller 280s raise error 280s except Exception as e: 280s if not cls.notebook_thread.is_alive(): 280s > raise RuntimeError("The notebook server failed to start") from e 280s E RuntimeError: The notebook server failed to start 280s 280s notebook/tests/launchnotebook.py:59: RuntimeError 280s ___________ ERROR at setup of BundleAPITest.test_bundler_not_enabled ___________ 280s 280s self = 280s 280s def _new_conn(self) -> socket.socket: 280s """Establish a socket connection and set nodelay settings on it. 280s 280s :return: New socket connection. 
280s """ 280s try: 280s > sock = connection.create_connection( 280s (self._dns_host, self.port), 280s self.timeout, 280s source_address=self.source_address, 280s socket_options=self.socket_options, 280s ) 280s 280s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 280s raise err 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s address = ('localhost', 12341), timeout = None, source_address = None 280s socket_options = [(6, 1, 1)] 280s 280s def create_connection( 280s address: tuple[str, int], 280s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 280s source_address: tuple[str, int] | None = None, 280s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 280s ) -> socket.socket: 280s """Connect to *address* and return the socket object. 280s 280s Convenience function. Connect to *address* (a 2-tuple ``(host, 280s port)``) and return the socket object. Passing the optional 280s *timeout* parameter will set the timeout on the socket instance 280s before attempting to connect. If no *timeout* is supplied, the 280s global default timeout setting returned by :func:`socket.getdefaulttimeout` 280s is used. If *source_address* is set it must be a tuple of (host, port) 280s for the socket to bind as a source address before making the connection. 280s An host of '' or port 0 tells the OS to use the default. 280s """ 280s 280s host, port = address 280s if host.startswith("["): 280s host = host.strip("[]") 280s err = None 280s 280s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 280s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 280s # The original create_connection function always returns all records. 
280s family = allowed_gai_family() 280s 280s try: 280s host.encode("idna") 280s except UnicodeError: 280s raise LocationParseError(f"'{host}', label empty or too long") from None 280s 280s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 280s af, socktype, proto, canonname, sa = res 280s sock = None 280s try: 280s sock = socket.socket(af, socktype, proto) 280s 280s # If provided, set socket level options before connecting. 280s _set_socket_options(sock, socket_options) 280s 280s if timeout is not _DEFAULT_TIMEOUT: 280s sock.settimeout(timeout) 280s if source_address: 280s sock.bind(source_address) 280s > sock.connect(sa) 280s E ConnectionRefusedError: [Errno 111] Connection refused 280s 280s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 280s 280s The above exception was the direct cause of the following exception: 280s 280s self = 280s method = 'GET', url = '/a%40b/api/contents', body = None 280s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 280s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 280s redirect = False, assert_same_host = False 280s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 280s release_conn = False, chunked = False, body_pos = None, preload_content = False 280s decode_content = False, response_kw = {} 280s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 280s destination_scheme = None, conn = None, release_this_conn = True 280s http_tunnel_required = False, err = None, clean_exit = False 280s 280s def urlopen( # type: ignore[override] 280s self, 280s method: str, 280s url: str, 280s body: _TYPE_BODY | None = None, 280s headers: typing.Mapping[str, str] | None = None, 280s retries: Retry | bool | int | None = None, 280s redirect: bool = True, 280s assert_same_host: bool = 
True, 280s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 280s pool_timeout: int | None = None, 280s release_conn: bool | None = None, 280s chunked: bool = False, 280s body_pos: _TYPE_BODY_POSITION | None = None, 280s preload_content: bool = True, 280s decode_content: bool = True, 280s **response_kw: typing.Any, 280s ) -> BaseHTTPResponse: 280s """ 280s Get a connection from the pool and perform an HTTP request. This is the 280s lowest level call for making a request, so you'll need to specify all 280s the raw details. 280s 280s .. note:: 280s 280s More commonly, it's appropriate to use a convenience method 280s such as :meth:`request`. 280s 280s .. note:: 280s 280s `release_conn` will only behave as expected if 280s `preload_content=False` because we want to make 280s `preload_content=False` the default behaviour someday soon without 280s breaking backwards compatibility. 280s 280s :param method: 280s HTTP request method (such as GET, POST, PUT, etc.) 280s 280s :param url: 280s The URL to perform the request on. 280s 280s :param body: 280s Data to send in the request body, either :class:`str`, :class:`bytes`, 280s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 280s 280s :param headers: 280s Dictionary of custom headers to send, such as User-Agent, 280s If-None-Match, etc. If None, pool headers are used. If provided, 280s these headers completely replace any pool-specific headers. 280s 280s :param retries: 280s Configure the number of retries to allow before raising a 280s :class:`~urllib3.exceptions.MaxRetryError` exception. 280s 280s Pass ``None`` to retry until you receive a response. Pass a 280s :class:`~urllib3.util.retry.Retry` object for fine-grained control 280s over different types of retries. 280s Pass an integer number to retry connection errors that many times, 280s but no other types of errors. Pass zero to never retry. 280s 280s If ``False``, then retries are disabled and any exception is raised 280s immediately. 
Also, instead of raising a MaxRetryError on redirects,
280s             the redirect response will be returned.
280s
280s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
280s
280s         :param redirect:
280s             If True, automatically handle redirects (status codes 301, 302,
280s             303, 307, 308). Each redirect counts as a retry. Disabling retries
280s             will disable redirect, too.
280s
280s         :param assert_same_host:
280s             If ``True``, will make sure that the host of the pool requests is
280s             consistent else will raise HostChangedError. When ``False``, you can
280s             use the pool on an HTTP proxy and request foreign hosts.
280s
280s         :param timeout:
280s             If specified, overrides the default timeout for this one
280s             request. It may be a float (in seconds) or an instance of
280s             :class:`urllib3.util.Timeout`.
280s
280s         :param pool_timeout:
280s             If set and the pool is set to block=True, then this method will
280s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
280s             connection is available within the time period.
280s
280s         :param bool preload_content:
280s             If True, the response's body will be preloaded into memory.
280s
280s         :param bool decode_content:
280s             If True, will attempt to decode the body based on the
280s             'content-encoding' header.
280s
280s         :param release_conn:
280s             If False, then the urlopen call will not release the connection
280s             back into the pool once a response is received (but will release if
280s             you read the entire contents of the response such as when
280s             `preload_content=True`). This is useful if you're not preloading
280s             the response's content immediately. You will need to call
280s             ``r.release_conn()`` on the response ``r`` to return the connection
280s             back into the pool. If None, it takes the value of ``preload_content``
280s             which defaults to ``True``.
280s
280s         :param bool chunked:
280s             If True, urllib3 will send the body using chunked transfer
280s             encoding. Otherwise, urllib3 will send the body using the standard
280s             content-length form. Defaults to False.
280s
280s         :param int body_pos:
280s             Position to seek to in file-like body in the event of a retry or
280s             redirect. Typically this won't need to be set because urllib3 will
280s             auto-populate the value when needed.
280s         """
280s         parsed_url = parse_url(url)
280s         destination_scheme = parsed_url.scheme
280s
280s         if headers is None:
280s             headers = self.headers
280s
280s         if not isinstance(retries, Retry):
280s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
280s
280s         if release_conn is None:
280s             release_conn = preload_content
280s
280s         # Check host
280s         if assert_same_host and not self.is_same_host(url):
280s             raise HostChangedError(self, url, retries)
280s
280s         # Ensure that the URL we're connecting to is properly encoded
280s         if url.startswith("/"):
280s             url = to_str(_encode_target(url))
280s         else:
280s             url = to_str(parsed_url.url)
280s
280s         conn = None
280s
280s         # Track whether `conn` needs to be released before
280s         # returning/raising/recursing. Update this variable if necessary, and
280s         # leave `release_conn` constant throughout the function. That way, if
280s         # the function recurses, the original value of `release_conn` will be
280s         # passed down into the recursive call, and its value will be respected.
280s         #
280s         # See issue #651 [1] for details.
280s         #
280s         # [1]
280s         release_this_conn = release_conn
280s
280s         http_tunnel_required = connection_requires_http_tunnel(
280s             self.proxy, self.proxy_config, destination_scheme
280s         )
280s
280s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
280s         # have to copy the headers dict so we can safely change it without those
280s         # changes being reflected in anyone else's copy.
280s         if not http_tunnel_required:
280s             headers = headers.copy()  # type: ignore[attr-defined]
280s             headers.update(self.proxy_headers)  # type: ignore[union-attr]
280s
280s         # Must keep the exception bound to a separate variable or else Python 3
280s         # complains about UnboundLocalError.
280s         err = None
280s
280s         # Keep track of whether we cleanly exited the except block. This
280s         # ensures we do proper cleanup in finally.
280s         clean_exit = False
280s
280s         # Rewind body position, if needed. Record current position
280s         # for future rewinds in the event of a redirect/retry.
280s         body_pos = set_file_position(body, body_pos)
280s
280s         try:
280s             # Request a connection from the queue.
280s             timeout_obj = self._get_timeout(timeout)
280s             conn = self._get_conn(timeout=pool_timeout)
280s
280s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
280s
280s             # Is this a closed/new connection that requires CONNECT tunnelling?
280s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
280s                 try:
280s                     self._prepare_proxy(conn)
280s                 except (BaseSSLError, OSError, SocketTimeout) as e:
280s                     self._raise_timeout(
280s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
280s                     )
280s                     raise
280s
280s             # If we're going to release the connection in ``finally:``, then
280s             # the response doesn't need to know about the connection. Otherwise
280s             # it will also try to release it and we'll have a double-release
280s             # mess.
280s             response_conn = conn if not release_conn else None
280s
280s             # Make the request on the HTTPConnection object
280s >           response = self._make_request(
280s                 conn,
280s                 method,
280s                 url,
280s                 timeout=timeout_obj,
280s                 body=body,
280s                 headers=headers,
280s                 chunked=chunked,
280s                 retries=retries,
280s                 response_conn=response_conn,
280s                 preload_content=preload_content,
280s                 decode_content=decode_content,
280s                 **response_kw,
280s             )
280s
280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791:
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
280s     conn.request(
280s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
280s     self.endheaders()
280s /usr/lib/python3.12/http/client.py:1331: in endheaders
280s     self._send_output(message_body, encode_chunked=encode_chunked)
280s /usr/lib/python3.12/http/client.py:1091: in _send_output
280s     self.send(msg)
280s /usr/lib/python3.12/http/client.py:1035: in send
280s     self.connect()
280s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
280s     self.sock = self._new_conn()
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
280s
280s self =
280s
280s     def _new_conn(self) -> socket.socket:
280s         """Establish a socket connection and set nodelay settings on it.
280s
280s         :return: New socket connection.
280s         """
280s         try:
280s             sock = connection.create_connection(
280s                 (self._dns_host, self.port),
280s                 self.timeout,
280s                 source_address=self.source_address,
280s                 socket_options=self.socket_options,
280s             )
280s         except socket.gaierror as e:
280s             raise NameResolutionError(self.host, self, e) from e
280s         except SocketTimeout as e:
280s             raise ConnectTimeoutError(
280s                 self,
280s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
280s             ) from e
280s
280s         except OSError as e:
280s >           raise NewConnectionError(
280s                 self, f"Failed to establish a new connection: {e}"
280s             ) from e
280s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
280s
280s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
280s
280s The above exception was the direct cause of the following exception:
280s
280s self =
280s request = , stream = False
280s timeout = Timeout(connect=None, read=None, total=None), verify = True
280s cert = None, proxies = OrderedDict()
280s
280s     def send(
280s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
280s     ):
280s         """Sends PreparedRequest object. Returns Response object.
280s
280s         :param request: The :class:`PreparedRequest ` being sent.
280s         :param stream: (optional) Whether to stream the request content.
280s         :param timeout: (optional) How long to wait for the server to send
280s             data before giving up, as a float, or a :ref:`(connect timeout,
280s             read timeout) ` tuple.
280s         :type timeout: float or tuple or urllib3 Timeout object
280s         :param verify: (optional) Either a boolean, in which case it controls whether
280s             we verify the server's TLS certificate, or a string, in which case it
280s             must be a path to a CA bundle to use
280s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
280s         :param proxies: (optional) The proxies dictionary to apply to the request.
280s         :rtype: requests.Response
280s         """
280s
280s         try:
280s             conn = self.get_connection(request.url, proxies)
280s         except LocationValueError as e:
280s             raise InvalidURL(e, request=request)
280s
280s         self.cert_verify(conn, request.url, verify, cert)
280s         url = self.request_url(request, proxies)
280s         self.add_headers(
280s             request,
280s             stream=stream,
280s             timeout=timeout,
280s             verify=verify,
280s             cert=cert,
280s             proxies=proxies,
280s         )
280s
280s         chunked = not (request.body is None or "Content-Length" in request.headers)
280s
280s         if isinstance(timeout, tuple):
280s             try:
280s                 connect, read = timeout
280s                 timeout = TimeoutSauce(connect=connect, read=read)
280s             except ValueError:
280s                 raise ValueError(
280s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
280s                     f"or a single float to set both timeouts to the same value."
280s                 )
280s         elif isinstance(timeout, TimeoutSauce):
280s             pass
280s         else:
280s             timeout = TimeoutSauce(connect=timeout, read=timeout)
280s
280s         try:
280s >           resp = conn.urlopen(
280s                 method=request.method,
280s                 url=url,
280s                 body=request.body,
280s                 headers=request.headers,
280s                 redirect=False,
280s                 assert_same_host=False,
280s                 preload_content=False,
280s                 decode_content=False,
280s                 retries=self.max_retries,
280s                 timeout=timeout,
280s                 chunked=chunked,
280s             )
280s
280s /usr/lib/python3/dist-packages/requests/adapters.py:486:
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
280s     retries = retries.increment(
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
280s
280s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
280s method = 'GET', url = '/a%40b/api/contents', response = None
280s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
280s _pool =
280s _stacktrace =
280s
280s     def increment(
280s         self,
280s         method: str | None = None,
280s         url: str | None = None,
280s         response: BaseHTTPResponse | None = None,
280s         error: Exception | None = None,
280s         _pool: ConnectionPool | None = None,
280s         _stacktrace: TracebackType | None = None,
280s     ) -> Retry:
280s         """Return a new Retry object with incremented retry counters.
280s
280s         :param response: A response object, or None, if the server did not
280s             return a response.
280s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
280s         :param Exception error: An error encountered during the request, or
280s             None if the response was received successfully.
280s
280s         :return: A new ``Retry`` object.
280s         """
280s         if self.total is False and error:
280s             # Disabled, indicate to re-raise the error.
280s             raise reraise(type(error), error, _stacktrace)
280s
280s         total = self.total
280s         if total is not None:
280s             total -= 1
280s
280s         connect = self.connect
280s         read = self.read
280s         redirect = self.redirect
280s         status_count = self.status
280s         other = self.other
280s         cause = "unknown"
280s         status = None
280s         redirect_location = None
280s
280s         if error and self._is_connection_error(error):
280s             # Connect retry?
280s             if connect is False:
280s                 raise reraise(type(error), error, _stacktrace)
280s             elif connect is not None:
280s                 connect -= 1
280s
280s         elif error and self._is_read_error(error):
280s             # Read retry?
280s             if read is False or method is None or not self._is_method_retryable(method):
280s                 raise reraise(type(error), error, _stacktrace)
280s             elif read is not None:
280s                 read -= 1
280s
280s         elif error:
280s             # Other retry?
280s             if other is not None:
280s                 other -= 1
280s
280s         elif response and response.get_redirect_location():
280s             # Redirect retry?
280s             if redirect is not None:
280s                 redirect -= 1
280s             cause = "too many redirects"
280s             response_redirect_location = response.get_redirect_location()
280s             if response_redirect_location:
280s                 redirect_location = response_redirect_location
280s             status = response.status
280s
280s         else:
280s             # Incrementing because of a server error like a 500 in
280s             # status_forcelist and the given method is in the allowed_methods
280s             cause = ResponseError.GENERIC_ERROR
280s             if response and response.status:
280s                 if status_count is not None:
280s                     status_count -= 1
280s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
280s                 status = response.status
280s
280s         history = self.history + (
280s             RequestHistory(method, url, error, status, redirect_location),
280s         )
280s
280s         new_retry = self.new(
280s             total=total,
280s             connect=connect,
280s             read=read,
280s             redirect=redirect,
280s             status=status_count,
280s             other=other,
280s             history=history,
280s         )
280s
280s         if new_retry.is_exhausted():
280s             reason = error or ResponseError(cause)
280s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
280s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
280s
280s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
280s
280s During handling of the above exception, another exception occurred:
280s
280s cls =
280s
280s     @classmethod
280s     def wait_until_alive(cls):
280s         """Wait for the server to be alive"""
280s         url = cls.base_url() + 'api/contents'
280s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
280s             try:
280s >               cls.fetch_url(url)
280s
280s notebook/tests/launchnotebook.py:53:
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
280s notebook/tests/launchnotebook.py:82: in fetch_url
280s     return requests.get(url)
280s /usr/lib/python3/dist-packages/requests/api.py:73: in get
280s     return request("get", url, params=params, **kwargs)
280s /usr/lib/python3/dist-packages/requests/api.py:59: in request
280s     return session.request(method=method, url=url, **kwargs)
280s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
280s     resp = self.send(prep, **send_kwargs)
280s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
280s     r = adapter.send(request, **kwargs)
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
280s
280s self =
280s request = , stream = False
280s timeout = Timeout(connect=None, read=None, total=None), verify = True
280s cert = None, proxies = OrderedDict()
280s
280s     def send(
280s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
280s     ):
280s         """Sends PreparedRequest object. Returns Response object.
280s
280s         :param request: The :class:`PreparedRequest ` being sent.
280s         :param stream: (optional) Whether to stream the request content.
280s         :param timeout: (optional) How long to wait for the server to send
280s             data before giving up, as a float, or a :ref:`(connect timeout,
280s             read timeout) ` tuple.
280s         :type timeout: float or tuple or urllib3 Timeout object
280s         :param verify: (optional) Either a boolean, in which case it controls whether
280s             we verify the server's TLS certificate, or a string, in which case it
280s             must be a path to a CA bundle to use
280s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
280s         :param proxies: (optional) The proxies dictionary to apply to the request.
280s         :rtype: requests.Response
280s         """
280s
280s         try:
280s             conn = self.get_connection(request.url, proxies)
280s         except LocationValueError as e:
280s             raise InvalidURL(e, request=request)
280s
280s         self.cert_verify(conn, request.url, verify, cert)
280s         url = self.request_url(request, proxies)
280s         self.add_headers(
280s             request,
280s             stream=stream,
280s             timeout=timeout,
280s             verify=verify,
280s             cert=cert,
280s             proxies=proxies,
280s         )
280s
280s         chunked = not (request.body is None or "Content-Length" in request.headers)
280s
280s         if isinstance(timeout, tuple):
280s             try:
280s                 connect, read = timeout
280s                 timeout = TimeoutSauce(connect=connect, read=read)
280s             except ValueError:
280s                 raise ValueError(
280s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
280s                     f"or a single float to set both timeouts to the same value."
280s                 )
280s         elif isinstance(timeout, TimeoutSauce):
280s             pass
280s         else:
280s             timeout = TimeoutSauce(connect=timeout, read=timeout)
280s
280s         try:
280s             resp = conn.urlopen(
280s                 method=request.method,
280s                 url=url,
280s                 body=request.body,
280s                 headers=request.headers,
280s                 redirect=False,
280s                 assert_same_host=False,
280s                 preload_content=False,
280s                 decode_content=False,
280s                 retries=self.max_retries,
280s                 timeout=timeout,
280s                 chunked=chunked,
280s             )
280s
280s         except (ProtocolError, OSError) as err:
280s             raise ConnectionError(err, request=request)
280s
280s         except MaxRetryError as e:
280s             if isinstance(e.reason, ConnectTimeoutError):
280s                 # TODO: Remove this in 3.0.0: see #2811
280s                 if not isinstance(e.reason, NewConnectionError):
280s                     raise ConnectTimeout(e, request=request)
280s
280s             if isinstance(e.reason, ResponseError):
280s                 raise RetryError(e, request=request)
280s
280s             if isinstance(e.reason, _ProxyError):
280s                 raise ProxyError(e, request=request)
280s
280s             if isinstance(e.reason, _SSLError):
280s                 # This branch is for urllib3 v1.22 and later.
280s                 raise SSLError(e, request=request)
280s
280s >           raise ConnectionError(e, request=request)
280s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
280s
280s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
280s
280s The above exception was the direct cause of the following exception:
280s
280s cls =
280s
280s     @classmethod
280s     def setup_class(cls):
280s         """Make a test notebook. Borrowed from nbconvert test. Assumes the class
280s         teardown will clean it up in the end."""
280s >       super().setup_class()
280s
280s notebook/bundler/tests/test_bundler_api.py:27:
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
280s notebook/tests/launchnotebook.py:198: in setup_class
280s     cls.wait_until_alive()
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
280s
280s cls =
280s
280s     @classmethod
280s     def wait_until_alive(cls):
280s         """Wait for the server to be alive"""
280s         url = cls.base_url() + 'api/contents'
280s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
280s             try:
280s                 cls.fetch_url(url)
280s             except ModuleNotFoundError as error:
280s                 # Errors that should be immediately thrown back to caller
280s                 raise error
280s             except Exception as e:
280s                 if not cls.notebook_thread.is_alive():
280s >                   raise RuntimeError("The notebook server failed to start") from e
280s E                   RuntimeError: The notebook server failed to start
280s
280s notebook/tests/launchnotebook.py:59: RuntimeError
280s ___________ ERROR at setup of BundleAPITest.test_missing_bundler_arg ___________
280s
280s self =
280s
280s     def _new_conn(self) -> socket.socket:
280s         """Establish a socket connection and set nodelay settings on it.
280s
280s         :return: New socket connection.
280s         """
280s         try:
280s >           sock = connection.create_connection(
280s                 (self._dns_host, self.port),
280s                 self.timeout,
280s                 source_address=self.source_address,
280s                 socket_options=self.socket_options,
280s             )
280s
280s /usr/lib/python3/dist-packages/urllib3/connection.py:203:
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
280s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
280s     raise err
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
280s
280s address = ('localhost', 12341), timeout = None, source_address = None
280s socket_options = [(6, 1, 1)]
280s
280s def create_connection(
280s     address: tuple[str, int],
280s     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
280s     source_address: tuple[str, int] | None = None,
280s     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
280s ) -> socket.socket:
280s     """Connect to *address* and return the socket object.
280s
280s     Convenience function. Connect to *address* (a 2-tuple ``(host,
280s     port)``) and return the socket object. Passing the optional
280s     *timeout* parameter will set the timeout on the socket instance
280s     before attempting to connect. If no *timeout* is supplied, the
280s     global default timeout setting returned by :func:`socket.getdefaulttimeout`
280s     is used. If *source_address* is set it must be a tuple of (host, port)
280s     for the socket to bind as a source address before making the connection.
280s     An host of '' or port 0 tells the OS to use the default.
280s     """
280s
280s     host, port = address
280s     if host.startswith("["):
280s         host = host.strip("[]")
280s     err = None
280s
280s     # Using the value from allowed_gai_family() in the context of getaddrinfo lets
280s     # us select whether to work with IPv4 DNS records, IPv6 records, or both.
280s     # The original create_connection function always returns all records.
280s     family = allowed_gai_family()
280s
280s     try:
280s         host.encode("idna")
280s     except UnicodeError:
280s         raise LocationParseError(f"'{host}', label empty or too long") from None
280s
280s     for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
280s         af, socktype, proto, canonname, sa = res
280s         sock = None
280s         try:
280s             sock = socket.socket(af, socktype, proto)
280s
280s             # If provided, set socket level options before connecting.
280s             _set_socket_options(sock, socket_options)
280s
280s             if timeout is not _DEFAULT_TIMEOUT:
280s                 sock.settimeout(timeout)
280s             if source_address:
280s                 sock.bind(source_address)
280s >           sock.connect(sa)
280s E           ConnectionRefusedError: [Errno 111] Connection refused
280s
280s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
280s
280s The above exception was the direct cause of the following exception:
280s
280s self =
280s method = 'GET', url = '/a%40b/api/contents', body = None
280s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
280s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
280s redirect = False, assert_same_host = False
280s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
280s release_conn = False, chunked = False, body_pos = None, preload_content = False
280s decode_content = False, response_kw = {}
280s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
280s destination_scheme = None, conn = None, release_this_conn = True
280s http_tunnel_required = False, err = None, clean_exit = False
280s
280s     def urlopen(  # type: ignore[override]
280s         self,
280s         method: str,
280s         url: str,
280s         body: _TYPE_BODY | None = None,
280s         headers: typing.Mapping[str, str] | None = None,
280s         retries: Retry | bool | int | None = None,
280s         redirect: bool = True,
280s         assert_same_host: bool = True,
280s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
280s         pool_timeout: int | None = None,
280s         release_conn: bool | None = None,
280s         chunked: bool = False,
280s         body_pos: _TYPE_BODY_POSITION | None = None,
280s         preload_content: bool = True,
280s         decode_content: bool = True,
280s         **response_kw: typing.Any,
280s     ) -> BaseHTTPResponse:
280s         """
280s         Get a connection from the pool and perform an HTTP request. This is the
280s         lowest level call for making a request, so you'll need to specify all
280s         the raw details.
280s
280s         .. note::
280s
280s             More commonly, it's appropriate to use a convenience method
280s             such as :meth:`request`.
280s
280s         .. note::
280s
280s             `release_conn` will only behave as expected if
280s             `preload_content=False` because we want to make
280s             `preload_content=False` the default behaviour someday soon without
280s             breaking backwards compatibility.
280s
280s         :param method:
280s             HTTP request method (such as GET, POST, PUT, etc.)
280s
280s         :param url:
280s             The URL to perform the request on.
280s
280s         :param body:
280s             Data to send in the request body, either :class:`str`, :class:`bytes`,
280s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
280s
280s         :param headers:
280s             Dictionary of custom headers to send, such as User-Agent,
280s             If-None-Match, etc. If None, pool headers are used. If provided,
280s             these headers completely replace any pool-specific headers.
280s
280s         :param retries:
280s             Configure the number of retries to allow before raising a
280s             :class:`~urllib3.exceptions.MaxRetryError` exception.
280s
280s             Pass ``None`` to retry until you receive a response. Pass a
280s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
280s             over different types of retries.
280s             Pass an integer number to retry connection errors that many times,
280s             but no other types of errors. Pass zero to never retry.
280s
280s             If ``False``, then retries are disabled and any exception is raised
280s             immediately. Also, instead of raising a MaxRetryError on redirects,
280s             the redirect response will be returned.
280s
280s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
280s
280s         :param redirect:
280s             If True, automatically handle redirects (status codes 301, 302,
280s             303, 307, 308). Each redirect counts as a retry. Disabling retries
280s             will disable redirect, too.
280s
280s         :param assert_same_host:
280s             If ``True``, will make sure that the host of the pool requests is
280s             consistent else will raise HostChangedError. When ``False``, you can
280s             use the pool on an HTTP proxy and request foreign hosts.
280s
280s         :param timeout:
280s             If specified, overrides the default timeout for this one
280s             request. It may be a float (in seconds) or an instance of
280s             :class:`urllib3.util.Timeout`.
280s
280s         :param pool_timeout:
280s             If set and the pool is set to block=True, then this method will
280s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
280s             connection is available within the time period.
280s
280s         :param bool preload_content:
280s             If True, the response's body will be preloaded into memory.
280s
280s         :param bool decode_content:
280s             If True, will attempt to decode the body based on the
280s             'content-encoding' header.
280s
280s         :param release_conn:
280s             If False, then the urlopen call will not release the connection
280s             back into the pool once a response is received (but will release if
280s             you read the entire contents of the response such as when
280s             `preload_content=True`). This is useful if you're not preloading
280s             the response's content immediately. You will need to call
280s             ``r.release_conn()`` on the response ``r`` to return the connection
280s             back into the pool. If None, it takes the value of ``preload_content``
280s             which defaults to ``True``.
280s
280s         :param bool chunked:
280s             If True, urllib3 will send the body using chunked transfer
280s             encoding. Otherwise, urllib3 will send the body using the standard
280s             content-length form. Defaults to False.
280s
280s         :param int body_pos:
280s             Position to seek to in file-like body in the event of a retry or
280s             redirect. Typically this won't need to be set because urllib3 will
280s             auto-populate the value when needed.
280s         """
280s         parsed_url = parse_url(url)
280s         destination_scheme = parsed_url.scheme
280s
280s         if headers is None:
280s             headers = self.headers
280s
280s         if not isinstance(retries, Retry):
280s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
280s
280s         if release_conn is None:
280s             release_conn = preload_content
280s
280s         # Check host
280s         if assert_same_host and not self.is_same_host(url):
280s             raise HostChangedError(self, url, retries)
280s
280s         # Ensure that the URL we're connecting to is properly encoded
280s         if url.startswith("/"):
280s             url = to_str(_encode_target(url))
280s         else:
280s             url = to_str(parsed_url.url)
280s
280s         conn = None
280s
280s         # Track whether `conn` needs to be released before
280s         # returning/raising/recursing. Update this variable if necessary, and
280s         # leave `release_conn` constant throughout the function. That way, if
280s         # the function recurses, the original value of `release_conn` will be
280s         # passed down into the recursive call, and its value will be respected.
280s         #
280s         # See issue #651 [1] for details.
280s         #
280s         # [1]
280s         release_this_conn = release_conn
280s
280s         http_tunnel_required = connection_requires_http_tunnel(
280s             self.proxy, self.proxy_config, destination_scheme
280s         )
280s
280s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
280s         # have to copy the headers dict so we can safely change it without those
280s         # changes being reflected in anyone else's copy.
280s         if not http_tunnel_required:
280s             headers = headers.copy()  # type: ignore[attr-defined]
280s             headers.update(self.proxy_headers)  # type: ignore[union-attr]
280s
280s         # Must keep the exception bound to a separate variable or else Python 3
280s         # complains about UnboundLocalError.
280s         err = None
280s
280s         # Keep track of whether we cleanly exited the except block. This
280s         # ensures we do proper cleanup in finally.
280s         clean_exit = False
280s
280s         # Rewind body position, if needed. Record current position
280s         # for future rewinds in the event of a redirect/retry.
280s         body_pos = set_file_position(body, body_pos)
280s
280s         try:
280s             # Request a connection from the queue.
280s             timeout_obj = self._get_timeout(timeout)
280s             conn = self._get_conn(timeout=pool_timeout)
280s
280s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
280s
280s             # Is this a closed/new connection that requires CONNECT tunnelling?
280s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
280s                 try:
280s                     self._prepare_proxy(conn)
280s                 except (BaseSSLError, OSError, SocketTimeout) as e:
280s                     self._raise_timeout(
280s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
280s                     )
280s                     raise
280s
280s             # If we're going to release the connection in ``finally:``, then
280s             # the response doesn't need to know about the connection. Otherwise
280s             # it will also try to release it and we'll have a double-release
280s             # mess.
280s             response_conn = conn if not release_conn else None
280s
280s             # Make the request on the HTTPConnection object
280s >           response = self._make_request(
280s                 conn,
280s                 method,
280s                 url,
280s                 timeout=timeout_obj,
280s                 body=body,
280s                 headers=headers,
280s                 chunked=chunked,
280s                 retries=retries,
280s                 response_conn=response_conn,
280s                 preload_content=preload_content,
280s                 decode_content=decode_content,
280s                 **response_kw,
280s             )
280s
280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791:
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
280s     conn.request(
280s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
280s     self.endheaders()
280s /usr/lib/python3.12/http/client.py:1331: in endheaders
280s     self._send_output(message_body, encode_chunked=encode_chunked)
280s /usr/lib/python3.12/http/client.py:1091: in _send_output
280s     self.send(msg)
280s /usr/lib/python3.12/http/client.py:1035: in send
280s     self.connect()
280s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
280s     self.sock = self._new_conn()
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
280s
280s self =
280s
280s     def _new_conn(self) -> socket.socket:
280s         """Establish a socket connection and set nodelay settings on it.
280s
280s         :return: New socket connection.
280s         """
280s         try:
280s             sock = connection.create_connection(
280s                 (self._dns_host, self.port),
280s                 self.timeout,
280s                 source_address=self.source_address,
280s                 socket_options=self.socket_options,
280s             )
280s         except socket.gaierror as e:
280s             raise NameResolutionError(self.host, self, e) from e
280s         except SocketTimeout as e:
280s             raise ConnectTimeoutError(
280s                 self,
280s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
280s             ) from e
280s
280s         except OSError as e:
280s >           raise NewConnectionError(
280s                 self, f"Failed to establish a new connection: {e}"
280s             ) from e
280s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
280s
280s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
280s
280s The above exception was the direct cause of the following exception:
280s
280s self =
280s request = , stream = False
280s timeout = Timeout(connect=None, read=None, total=None), verify = True
280s cert = None, proxies = OrderedDict()
280s
280s     def send(
280s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
280s     ):
280s         """Sends PreparedRequest object. Returns Response object.
280s
280s         :param request: The :class:`PreparedRequest ` being sent.
280s         :param stream: (optional) Whether to stream the request content.
280s         :param timeout: (optional) How long to wait for the server to send
280s             data before giving up, as a float, or a :ref:`(connect timeout,
280s             read timeout) ` tuple.
280s         :type timeout: float or tuple or urllib3 Timeout object
280s         :param verify: (optional) Either a boolean, in which case it controls whether
280s             we verify the server's TLS certificate, or a string, in which case it
280s             must be a path to a CA bundle to use
280s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
280s         :param proxies: (optional) The proxies dictionary to apply to the request.
280s         :rtype: requests.Response
280s         """
280s
280s         try:
280s             conn = self.get_connection(request.url, proxies)
280s         except LocationValueError as e:
280s             raise InvalidURL(e, request=request)
280s
280s         self.cert_verify(conn, request.url, verify, cert)
280s         url = self.request_url(request, proxies)
280s         self.add_headers(
280s             request,
280s             stream=stream,
280s             timeout=timeout,
280s             verify=verify,
280s             cert=cert,
280s             proxies=proxies,
280s         )
280s
280s         chunked = not (request.body is None or "Content-Length" in request.headers)
280s
280s         if isinstance(timeout, tuple):
280s             try:
280s                 connect, read = timeout
280s                 timeout = TimeoutSauce(connect=connect, read=read)
280s             except ValueError:
280s                 raise ValueError(
280s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
280s                     f"or a single float to set both timeouts to the same value."
280s                 )
280s         elif isinstance(timeout, TimeoutSauce):
280s             pass
280s         else:
280s             timeout = TimeoutSauce(connect=timeout, read=timeout)
280s
280s         try:
280s >           resp = conn.urlopen(
280s                 method=request.method,
280s                 url=url,
280s                 body=request.body,
280s                 headers=request.headers,
280s                 redirect=False,
280s                 assert_same_host=False,
280s                 preload_content=False,
280s                 decode_content=False,
280s                 retries=self.max_retries,
280s                 timeout=timeout,
280s                 chunked=chunked,
280s             )
280s
280s /usr/lib/python3/dist-packages/requests/adapters.py:486:
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
280s     retries = retries.increment(
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
280s
280s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
280s method = 'GET', url = '/a%40b/api/contents', response = None
280s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
280s _pool =
280s _stacktrace =
280s
280s     def increment(
280s         self,
280s         method: str | None = None,
280s         url: str | None = None,
280s         response: BaseHTTPResponse | None = None,
280s         error: Exception | None = None,
280s         _pool: ConnectionPool | None = None,
280s         _stacktrace: TracebackType | None = None,
280s     ) -> Retry:
280s         """Return a new Retry object with incremented retry counters.
280s
280s         :param response: A response object, or None, if the server did not
280s             return a response.
280s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
280s         :param Exception error: An error encountered during the request, or
280s             None if the response was received successfully.
280s
280s         :return: A new ``Retry`` object.
280s         """
280s         if self.total is False and error:
280s             # Disabled, indicate to re-raise the error.
280s             raise reraise(type(error), error, _stacktrace)
280s
280s         total = self.total
280s         if total is not None:
280s             total -= 1
280s
280s         connect = self.connect
280s         read = self.read
280s         redirect = self.redirect
280s         status_count = self.status
280s         other = self.other
280s         cause = "unknown"
280s         status = None
280s         redirect_location = None
280s
280s         if error and self._is_connection_error(error):
280s             # Connect retry?
280s             if connect is False:
280s                 raise reraise(type(error), error, _stacktrace)
280s             elif connect is not None:
280s                 connect -= 1
280s
280s         elif error and self._is_read_error(error):
280s             # Read retry?
280s             if read is False or method is None or not self._is_method_retryable(method):
280s                 raise reraise(type(error), error, _stacktrace)
280s             elif read is not None:
280s                 read -= 1
280s
280s         elif error:
280s             # Other retry?
280s             if other is not None:
280s                 other -= 1
280s
280s         elif response and response.get_redirect_location():
280s             # Redirect retry?
280s             if redirect is not None:
280s                 redirect -= 1
280s             cause = "too many redirects"
280s             response_redirect_location = response.get_redirect_location()
280s             if response_redirect_location:
280s                 redirect_location = response_redirect_location
280s             status = response.status
280s
280s         else:
280s             # Incrementing because of a server error like a 500 in
280s             # status_forcelist and the given method is in the allowed_methods
280s             cause = ResponseError.GENERIC_ERROR
280s             if response and response.status:
280s                 if status_count is not None:
280s                     status_count -= 1
280s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
280s                 status = response.status
280s
280s         history = self.history + (
280s             RequestHistory(method, url, error, status, redirect_location),
280s         )
280s
280s         new_retry = self.new(
280s             total=total,
280s             connect=connect,
280s             read=read,
280s             redirect=redirect,
280s             status=status_count,
280s             other=other,
280s             history=history,
280s         )
280s
280s         if new_retry.is_exhausted():
280s             reason = error or ResponseError(cause)
280s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
280s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
280s
280s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
280s
280s During handling of the above exception, another exception occurred:
280s
280s cls =
280s
280s     @classmethod
280s     def wait_until_alive(cls):
280s         """Wait for the server to be alive"""
280s         url = cls.base_url() + 'api/contents'
280s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
280s             try:
280s >               cls.fetch_url(url)
280s
280s notebook/tests/launchnotebook.py:53:
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
280s notebook/tests/launchnotebook.py:82: in fetch_url
280s     return requests.get(url)
280s
/usr/lib/python3/dist-packages/requests/api.py:73: in get 280s return request("get", url, params=params, **kwargs) 280s /usr/lib/python3/dist-packages/requests/api.py:59: in request 280s return session.request(method=method, url=url, **kwargs) 280s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 280s resp = self.send(prep, **send_kwargs) 280s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 280s r = adapter.send(request, **kwargs) 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s self = 280s request = , stream = False 280s timeout = Timeout(connect=None, read=None, total=None), verify = True 280s cert = None, proxies = OrderedDict() 280s 280s def send( 280s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 280s ): 280s """Sends PreparedRequest object. Returns Response object. 280s 280s :param request: The :class:`PreparedRequest ` being sent. 280s :param stream: (optional) Whether to stream the request content. 280s :param timeout: (optional) How long to wait for the server to send 280s data before giving up, as a float, or a :ref:`(connect timeout, 280s read timeout) ` tuple. 280s :type timeout: float or tuple or urllib3 Timeout object 280s :param verify: (optional) Either a boolean, in which case it controls whether 280s we verify the server's TLS certificate, or a string, in which case it 280s must be a path to a CA bundle to use 280s :param cert: (optional) Any user-provided SSL certificate to be trusted. 280s :param proxies: (optional) The proxies dictionary to apply to the request. 
280s :rtype: requests.Response 280s """ 280s 280s try: 280s conn = self.get_connection(request.url, proxies) 280s except LocationValueError as e: 280s raise InvalidURL(e, request=request) 280s 280s self.cert_verify(conn, request.url, verify, cert) 280s url = self.request_url(request, proxies) 280s self.add_headers( 280s request, 280s stream=stream, 280s timeout=timeout, 280s verify=verify, 280s cert=cert, 280s proxies=proxies, 280s ) 280s 280s chunked = not (request.body is None or "Content-Length" in request.headers) 280s 280s if isinstance(timeout, tuple): 280s try: 280s connect, read = timeout 280s timeout = TimeoutSauce(connect=connect, read=read) 280s except ValueError: 280s raise ValueError( 280s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 280s f"or a single float to set both timeouts to the same value." 280s ) 280s elif isinstance(timeout, TimeoutSauce): 280s pass 280s else: 280s timeout = TimeoutSauce(connect=timeout, read=timeout) 280s 280s try: 280s resp = conn.urlopen( 280s method=request.method, 280s url=url, 280s body=request.body, 280s headers=request.headers, 280s redirect=False, 280s assert_same_host=False, 280s preload_content=False, 280s decode_content=False, 280s retries=self.max_retries, 280s timeout=timeout, 280s chunked=chunked, 280s ) 280s 280s except (ProtocolError, OSError) as err: 280s raise ConnectionError(err, request=request) 280s 280s except MaxRetryError as e: 280s if isinstance(e.reason, ConnectTimeoutError): 280s # TODO: Remove this in 3.0.0: see #2811 280s if not isinstance(e.reason, NewConnectionError): 280s raise ConnectTimeout(e, request=request) 280s 280s if isinstance(e.reason, ResponseError): 280s raise RetryError(e, request=request) 280s 280s if isinstance(e.reason, _ProxyError): 280s raise ProxyError(e, request=request) 280s 280s if isinstance(e.reason, _SSLError): 280s # This branch is for urllib3 v1.22 and later. 
280s raise SSLError(e, request=request) 280s 280s > raise ConnectionError(e, request=request) 280s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 280s 280s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 280s 280s The above exception was the direct cause of the following exception: 280s 280s cls = 280s 280s @classmethod 280s def setup_class(cls): 280s """Make a test notebook. Borrowed from nbconvert test. Assumes the class 280s teardown will clean it up in the end.""" 280s > super().setup_class() 280s 280s notebook/bundler/tests/test_bundler_api.py:27: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s notebook/tests/launchnotebook.py:198: in setup_class 280s cls.wait_until_alive() 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s cls = 280s 280s @classmethod 280s def wait_until_alive(cls): 280s """Wait for the server to be alive""" 280s url = cls.base_url() + 'api/contents' 280s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 280s try: 280s cls.fetch_url(url) 280s except ModuleNotFoundError as error: 280s # Errors that should be immediately thrown back to caller 280s raise error 280s except Exception as e: 280s if not cls.notebook_thread.is_alive(): 280s > raise RuntimeError("The notebook server failed to start") from e 280s E RuntimeError: The notebook server failed to start 280s 280s notebook/tests/launchnotebook.py:59: RuntimeError 280s ___________ ERROR at setup of BundleAPITest.test_notebook_not_found ____________ 280s 280s self = 280s 280s def _new_conn(self) -> socket.socket: 280s """Establish a socket connection and set nodelay settings on it. 280s 280s :return: New socket connection. 
280s """ 280s try: 280s > sock = connection.create_connection( 280s (self._dns_host, self.port), 280s self.timeout, 280s source_address=self.source_address, 280s socket_options=self.socket_options, 280s ) 280s 280s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 280s raise err 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s address = ('localhost', 12341), timeout = None, source_address = None 280s socket_options = [(6, 1, 1)] 280s 280s def create_connection( 280s address: tuple[str, int], 280s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 280s source_address: tuple[str, int] | None = None, 280s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 280s ) -> socket.socket: 280s """Connect to *address* and return the socket object. 280s 280s Convenience function. Connect to *address* (a 2-tuple ``(host, 280s port)``) and return the socket object. Passing the optional 280s *timeout* parameter will set the timeout on the socket instance 280s before attempting to connect. If no *timeout* is supplied, the 280s global default timeout setting returned by :func:`socket.getdefaulttimeout` 280s is used. If *source_address* is set it must be a tuple of (host, port) 280s for the socket to bind as a source address before making the connection. 280s An host of '' or port 0 tells the OS to use the default. 280s """ 280s 280s host, port = address 280s if host.startswith("["): 280s host = host.strip("[]") 280s err = None 280s 280s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 280s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 280s # The original create_connection function always returns all records. 
280s family = allowed_gai_family() 280s 280s try: 280s host.encode("idna") 280s except UnicodeError: 280s raise LocationParseError(f"'{host}', label empty or too long") from None 280s 280s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 280s af, socktype, proto, canonname, sa = res 280s sock = None 280s try: 280s sock = socket.socket(af, socktype, proto) 280s 280s # If provided, set socket level options before connecting. 280s _set_socket_options(sock, socket_options) 280s 280s if timeout is not _DEFAULT_TIMEOUT: 280s sock.settimeout(timeout) 280s if source_address: 280s sock.bind(source_address) 280s > sock.connect(sa) 280s E ConnectionRefusedError: [Errno 111] Connection refused 280s 280s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 280s 280s The above exception was the direct cause of the following exception: 280s 280s self = 280s method = 'GET', url = '/a%40b/api/contents', body = None 280s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 280s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 280s redirect = False, assert_same_host = False 280s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 280s release_conn = False, chunked = False, body_pos = None, preload_content = False 280s decode_content = False, response_kw = {} 280s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 280s destination_scheme = None, conn = None, release_this_conn = True 280s http_tunnel_required = False, err = None, clean_exit = False 280s 280s def urlopen( # type: ignore[override] 280s self, 280s method: str, 280s url: str, 280s body: _TYPE_BODY | None = None, 280s headers: typing.Mapping[str, str] | None = None, 280s retries: Retry | bool | int | None = None, 280s redirect: bool = True, 280s assert_same_host: bool = 
True, 280s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 280s pool_timeout: int | None = None, 280s release_conn: bool | None = None, 280s chunked: bool = False, 280s body_pos: _TYPE_BODY_POSITION | None = None, 280s preload_content: bool = True, 280s decode_content: bool = True, 280s **response_kw: typing.Any, 280s ) -> BaseHTTPResponse: 280s """ 280s Get a connection from the pool and perform an HTTP request. This is the 280s lowest level call for making a request, so you'll need to specify all 280s the raw details. 280s 280s .. note:: 280s 280s More commonly, it's appropriate to use a convenience method 280s such as :meth:`request`. 280s 280s .. note:: 280s 280s `release_conn` will only behave as expected if 280s `preload_content=False` because we want to make 280s `preload_content=False` the default behaviour someday soon without 280s breaking backwards compatibility. 280s 280s :param method: 280s HTTP request method (such as GET, POST, PUT, etc.) 280s 280s :param url: 280s The URL to perform the request on. 280s 280s :param body: 280s Data to send in the request body, either :class:`str`, :class:`bytes`, 280s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 280s 280s :param headers: 280s Dictionary of custom headers to send, such as User-Agent, 280s If-None-Match, etc. If None, pool headers are used. If provided, 280s these headers completely replace any pool-specific headers. 280s 280s :param retries: 280s Configure the number of retries to allow before raising a 280s :class:`~urllib3.exceptions.MaxRetryError` exception. 280s 280s Pass ``None`` to retry until you receive a response. Pass a 280s :class:`~urllib3.util.retry.Retry` object for fine-grained control 280s over different types of retries. 280s Pass an integer number to retry connection errors that many times, 280s but no other types of errors. Pass zero to never retry. 280s 280s If ``False``, then retries are disabled and any exception is raised 280s immediately. 
Also, instead of raising a MaxRetryError on redirects, 280s the redirect response will be returned. 280s 280s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 280s 280s :param redirect: 280s If True, automatically handle redirects (status codes 301, 302, 280s 303, 307, 308). Each redirect counts as a retry. Disabling retries 280s will disable redirect, too. 280s 280s :param assert_same_host: 280s If ``True``, will make sure that the host of the pool requests is 280s consistent else will raise HostChangedError. When ``False``, you can 280s use the pool on an HTTP proxy and request foreign hosts. 280s 280s :param timeout: 280s If specified, overrides the default timeout for this one 280s request. It may be a float (in seconds) or an instance of 280s :class:`urllib3.util.Timeout`. 280s 280s :param pool_timeout: 280s If set and the pool is set to block=True, then this method will 280s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 280s connection is available within the time period. 280s 280s :param bool preload_content: 280s If True, the response's body will be preloaded into memory. 280s 280s :param bool decode_content: 280s If True, will attempt to decode the body based on the 280s 'content-encoding' header. 280s 280s :param release_conn: 280s If False, then the urlopen call will not release the connection 280s back into the pool once a response is received (but will release if 280s you read the entire contents of the response such as when 280s `preload_content=True`). This is useful if you're not preloading 280s the response's content immediately. You will need to call 280s ``r.release_conn()`` on the response ``r`` to return the connection 280s back into the pool. If None, it takes the value of ``preload_content`` 280s which defaults to ``True``. 280s 280s :param bool chunked: 280s If True, urllib3 will send the body using chunked transfer 280s encoding. 
Otherwise, urllib3 will send the body using the standard 280s content-length form. Defaults to False. 280s 280s :param int body_pos: 280s Position to seek to in file-like body in the event of a retry or 280s redirect. Typically this won't need to be set because urllib3 will 280s auto-populate the value when needed. 280s """ 280s parsed_url = parse_url(url) 280s destination_scheme = parsed_url.scheme 280s 280s if headers is None: 280s headers = self.headers 280s 280s if not isinstance(retries, Retry): 280s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 280s 280s if release_conn is None: 280s release_conn = preload_content 280s 280s # Check host 280s if assert_same_host and not self.is_same_host(url): 280s raise HostChangedError(self, url, retries) 280s 280s # Ensure that the URL we're connecting to is properly encoded 280s if url.startswith("/"): 280s url = to_str(_encode_target(url)) 280s else: 280s url = to_str(parsed_url.url) 280s 280s conn = None 280s 280s # Track whether `conn` needs to be released before 280s # returning/raising/recursing. Update this variable if necessary, and 280s # leave `release_conn` constant throughout the function. That way, if 280s # the function recurses, the original value of `release_conn` will be 280s # passed down into the recursive call, and its value will be respected. 280s # 280s # See issue #651 [1] for details. 280s # 280s # [1] 280s release_this_conn = release_conn 280s 280s http_tunnel_required = connection_requires_http_tunnel( 280s self.proxy, self.proxy_config, destination_scheme 280s ) 280s 280s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 280s # have to copy the headers dict so we can safely change it without those 280s # changes being reflected in anyone else's copy. 
280s if not http_tunnel_required: 280s headers = headers.copy() # type: ignore[attr-defined] 280s headers.update(self.proxy_headers) # type: ignore[union-attr] 280s 280s # Must keep the exception bound to a separate variable or else Python 3 280s # complains about UnboundLocalError. 280s err = None 280s 280s # Keep track of whether we cleanly exited the except block. This 280s # ensures we do proper cleanup in finally. 280s clean_exit = False 280s 280s # Rewind body position, if needed. Record current position 280s # for future rewinds in the event of a redirect/retry. 280s body_pos = set_file_position(body, body_pos) 280s 280s try: 280s # Request a connection from the queue. 280s timeout_obj = self._get_timeout(timeout) 280s conn = self._get_conn(timeout=pool_timeout) 280s 280s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 280s 280s # Is this a closed/new connection that requires CONNECT tunnelling? 280s if self.proxy is not None and http_tunnel_required and conn.is_closed: 280s try: 280s self._prepare_proxy(conn) 280s except (BaseSSLError, OSError, SocketTimeout) as e: 280s self._raise_timeout( 280s err=e, url=self.proxy.url, timeout_value=conn.timeout 280s ) 280s raise 280s 280s # If we're going to release the connection in ``finally:``, then 280s # the response doesn't need to know about the connection. Otherwise 280s # it will also try to release it and we'll have a double-release 280s # mess. 
280s response_conn = conn if not release_conn else None 280s 280s # Make the request on the HTTPConnection object 280s > response = self._make_request( 280s conn, 280s method, 280s url, 280s timeout=timeout_obj, 280s body=body, 280s headers=headers, 280s chunked=chunked, 280s retries=retries, 280s response_conn=response_conn, 280s preload_content=preload_content, 280s decode_content=decode_content, 280s **response_kw, 280s ) 280s 280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 280s conn.request( 280s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 280s self.endheaders() 280s /usr/lib/python3.12/http/client.py:1331: in endheaders 280s self._send_output(message_body, encode_chunked=encode_chunked) 280s /usr/lib/python3.12/http/client.py:1091: in _send_output 280s self.send(msg) 280s /usr/lib/python3.12/http/client.py:1035: in send 280s self.connect() 280s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 280s self.sock = self._new_conn() 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s self = 280s 280s def _new_conn(self) -> socket.socket: 280s """Establish a socket connection and set nodelay settings on it. 280s 280s :return: New socket connection. 280s """ 280s try: 280s sock = connection.create_connection( 280s (self._dns_host, self.port), 280s self.timeout, 280s source_address=self.source_address, 280s socket_options=self.socket_options, 280s ) 280s except socket.gaierror as e: 280s raise NameResolutionError(self.host, self, e) from e 280s except SocketTimeout as e: 280s raise ConnectTimeoutError( 280s self, 280s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 280s ) from e 280s 280s except OSError as e: 280s > raise NewConnectionError( 280s self, f"Failed to establish a new connection: {e}" 280s ) from e 280s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 280s 280s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 280s 280s The above exception was the direct cause of the following exception: 280s 280s self = 280s request = , stream = False 280s timeout = Timeout(connect=None, read=None, total=None), verify = True 280s cert = None, proxies = OrderedDict() 280s 280s def send( 280s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 280s ): 280s """Sends PreparedRequest object. Returns Response object. 280s 280s :param request: The :class:`PreparedRequest ` being sent. 280s :param stream: (optional) Whether to stream the request content. 280s :param timeout: (optional) How long to wait for the server to send 280s data before giving up, as a float, or a :ref:`(connect timeout, 280s read timeout) ` tuple. 280s :type timeout: float or tuple or urllib3 Timeout object 280s :param verify: (optional) Either a boolean, in which case it controls whether 280s we verify the server's TLS certificate, or a string, in which case it 280s must be a path to a CA bundle to use 280s :param cert: (optional) Any user-provided SSL certificate to be trusted. 280s :param proxies: (optional) The proxies dictionary to apply to the request. 
280s :rtype: requests.Response 280s """ 280s 280s try: 280s conn = self.get_connection(request.url, proxies) 280s except LocationValueError as e: 280s raise InvalidURL(e, request=request) 280s 280s self.cert_verify(conn, request.url, verify, cert) 280s url = self.request_url(request, proxies) 280s self.add_headers( 280s request, 280s stream=stream, 280s timeout=timeout, 280s verify=verify, 280s cert=cert, 280s proxies=proxies, 280s ) 280s 280s chunked = not (request.body is None or "Content-Length" in request.headers) 280s 280s if isinstance(timeout, tuple): 280s try: 280s connect, read = timeout 280s timeout = TimeoutSauce(connect=connect, read=read) 280s except ValueError: 280s raise ValueError( 280s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 280s f"or a single float to set both timeouts to the same value." 280s ) 280s elif isinstance(timeout, TimeoutSauce): 280s pass 280s else: 280s timeout = TimeoutSauce(connect=timeout, read=timeout) 280s 280s try: 280s > resp = conn.urlopen( 280s method=request.method, 280s url=url, 280s body=request.body, 280s headers=request.headers, 280s redirect=False, 280s assert_same_host=False, 280s preload_content=False, 280s decode_content=False, 280s retries=self.max_retries, 280s timeout=timeout, 280s chunked=chunked, 280s ) 280s 280s /usr/lib/python3/dist-packages/requests/adapters.py:486: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 280s retries = retries.increment( 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 280s method = 'GET', url = '/a%40b/api/contents', response = None 280s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 280s _pool = 280s _stacktrace = 280s 280s def increment( 280s self, 280s method: str | 
None = None, 280s url: str | None = None, 280s response: BaseHTTPResponse | None = None, 280s error: Exception | None = None, 280s _pool: ConnectionPool | None = None, 280s _stacktrace: TracebackType | None = None, 280s ) -> Retry: 280s """Return a new Retry object with incremented retry counters. 280s 280s :param response: A response object, or None, if the server did not 280s return a response. 280s :type response: :class:`~urllib3.response.BaseHTTPResponse` 280s :param Exception error: An error encountered during the request, or 280s None if the response was received successfully. 280s 280s :return: A new ``Retry`` object. 280s """ 280s if self.total is False and error: 280s # Disabled, indicate to re-raise the error. 280s raise reraise(type(error), error, _stacktrace) 280s 280s total = self.total 280s if total is not None: 280s total -= 1 280s 280s connect = self.connect 280s read = self.read 280s redirect = self.redirect 280s status_count = self.status 280s other = self.other 280s cause = "unknown" 280s status = None 280s redirect_location = None 280s 280s if error and self._is_connection_error(error): 280s # Connect retry? 280s if connect is False: 280s raise reraise(type(error), error, _stacktrace) 280s elif connect is not None: 280s connect -= 1 280s 280s elif error and self._is_read_error(error): 280s # Read retry? 280s if read is False or method is None or not self._is_method_retryable(method): 280s raise reraise(type(error), error, _stacktrace) 280s elif read is not None: 280s read -= 1 280s 280s elif error: 280s # Other retry? 280s if other is not None: 280s other -= 1 280s 280s elif response and response.get_redirect_location(): 280s # Redirect retry? 
280s if redirect is not None: 280s redirect -= 1 280s cause = "too many redirects" 280s response_redirect_location = response.get_redirect_location() 280s if response_redirect_location: 280s redirect_location = response_redirect_location 280s status = response.status 280s 280s else: 280s # Incrementing because of a server error like a 500 in 280s # status_forcelist and the given method is in the allowed_methods 280s cause = ResponseError.GENERIC_ERROR 280s if response and response.status: 280s if status_count is not None: 280s status_count -= 1 280s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 280s status = response.status 280s 280s history = self.history + ( 280s RequestHistory(method, url, error, status, redirect_location), 280s ) 280s 280s new_retry = self.new( 280s total=total, 280s connect=connect, 280s read=read, 280s redirect=redirect, 280s status=status_count, 280s other=other, 280s history=history, 280s ) 280s 280s if new_retry.is_exhausted(): 280s reason = error or ResponseError(cause) 280s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 280s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 280s 280s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 280s 280s During handling of the above exception, another exception occurred: 280s 280s cls = 280s 280s @classmethod 280s def wait_until_alive(cls): 280s """Wait for the server to be alive""" 280s url = cls.base_url() + 'api/contents' 280s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 280s try: 280s > cls.fetch_url(url) 280s 280s notebook/tests/launchnotebook.py:53: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s notebook/tests/launchnotebook.py:82: in fetch_url 280s return requests.get(url) 280s 
/usr/lib/python3/dist-packages/requests/api.py:73: in get 280s return request("get", url, params=params, **kwargs) 280s /usr/lib/python3/dist-packages/requests/api.py:59: in request 280s return session.request(method=method, url=url, **kwargs) 280s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 280s resp = self.send(prep, **send_kwargs) 280s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 280s r = adapter.send(request, **kwargs) 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s self = 280s request = , stream = False 280s timeout = Timeout(connect=None, read=None, total=None), verify = True 280s cert = None, proxies = OrderedDict() 280s 280s def send( 280s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 280s ): 280s """Sends PreparedRequest object. Returns Response object. 280s 280s :param request: The :class:`PreparedRequest ` being sent. 280s :param stream: (optional) Whether to stream the request content. 280s :param timeout: (optional) How long to wait for the server to send 280s data before giving up, as a float, or a :ref:`(connect timeout, 280s read timeout) ` tuple. 280s :type timeout: float or tuple or urllib3 Timeout object 280s :param verify: (optional) Either a boolean, in which case it controls whether 280s we verify the server's TLS certificate, or a string, in which case it 280s must be a path to a CA bundle to use 280s :param cert: (optional) Any user-provided SSL certificate to be trusted. 280s :param proxies: (optional) The proxies dictionary to apply to the request. 
280s :rtype: requests.Response 280s """ 280s 280s try: 280s conn = self.get_connection(request.url, proxies) 280s except LocationValueError as e: 280s raise InvalidURL(e, request=request) 280s 280s self.cert_verify(conn, request.url, verify, cert) 280s url = self.request_url(request, proxies) 280s self.add_headers( 280s request, 280s stream=stream, 280s timeout=timeout, 280s verify=verify, 280s cert=cert, 280s proxies=proxies, 280s ) 280s 280s chunked = not (request.body is None or "Content-Length" in request.headers) 280s 280s if isinstance(timeout, tuple): 280s try: 280s connect, read = timeout 280s timeout = TimeoutSauce(connect=connect, read=read) 280s except ValueError: 280s raise ValueError( 280s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 280s f"or a single float to set both timeouts to the same value." 280s ) 280s elif isinstance(timeout, TimeoutSauce): 280s pass 280s else: 280s timeout = TimeoutSauce(connect=timeout, read=timeout) 280s 280s try: 280s resp = conn.urlopen( 280s method=request.method, 280s url=url, 280s body=request.body, 280s headers=request.headers, 280s redirect=False, 280s assert_same_host=False, 280s preload_content=False, 280s decode_content=False, 280s retries=self.max_retries, 280s timeout=timeout, 280s chunked=chunked, 280s ) 280s 280s except (ProtocolError, OSError) as err: 280s raise ConnectionError(err, request=request) 280s 280s except MaxRetryError as e: 280s if isinstance(e.reason, ConnectTimeoutError): 280s # TODO: Remove this in 3.0.0: see #2811 280s if not isinstance(e.reason, NewConnectionError): 280s raise ConnectTimeout(e, request=request) 280s 280s if isinstance(e.reason, ResponseError): 280s raise RetryError(e, request=request) 280s 280s if isinstance(e.reason, _ProxyError): 280s raise ProxyError(e, request=request) 280s 280s if isinstance(e.reason, _SSLError): 280s # This branch is for urllib3 v1.22 and later. 
280s raise SSLError(e, request=request) 280s 280s > raise ConnectionError(e, request=request) 280s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 280s 280s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 280s 280s The above exception was the direct cause of the following exception: 280s 280s cls = 280s 280s @classmethod 280s def setup_class(cls): 280s """Make a test notebook. Borrowed from nbconvert test. Assumes the class 280s teardown will clean it up in the end.""" 280s > super().setup_class() 280s 280s notebook/bundler/tests/test_bundler_api.py:27: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s notebook/tests/launchnotebook.py:198: in setup_class 280s cls.wait_until_alive() 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s cls = 280s 280s @classmethod 280s def wait_until_alive(cls): 280s """Wait for the server to be alive""" 280s url = cls.base_url() + 'api/contents' 280s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 280s try: 280s cls.fetch_url(url) 280s except ModuleNotFoundError as error: 280s # Errors that should be immediately thrown back to caller 280s raise error 280s except Exception as e: 280s if not cls.notebook_thread.is_alive(): 280s > raise RuntimeError("The notebook server failed to start") from e 280s E RuntimeError: The notebook server failed to start 280s 280s notebook/tests/launchnotebook.py:59: RuntimeError 280s ___________________ ERROR at setup of APITest.test_get_spec ____________________ 280s 280s self = 280s 280s def _new_conn(self) -> socket.socket: 280s """Establish a socket connection and set nodelay settings on it. 280s 280s :return: New socket connection. 
280s """ 280s try: 280s > sock = connection.create_connection( 280s (self._dns_host, self.port), 280s self.timeout, 280s source_address=self.source_address, 280s socket_options=self.socket_options, 280s ) 280s 280s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 280s raise err 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s address = ('localhost', 12341), timeout = None, source_address = None 280s socket_options = [(6, 1, 1)] 280s 280s def create_connection( 280s address: tuple[str, int], 280s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 280s source_address: tuple[str, int] | None = None, 280s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 280s ) -> socket.socket: 280s """Connect to *address* and return the socket object. 280s 280s Convenience function. Connect to *address* (a 2-tuple ``(host, 280s port)``) and return the socket object. Passing the optional 280s *timeout* parameter will set the timeout on the socket instance 280s before attempting to connect. If no *timeout* is supplied, the 280s global default timeout setting returned by :func:`socket.getdefaulttimeout` 280s is used. If *source_address* is set it must be a tuple of (host, port) 280s for the socket to bind as a source address before making the connection. 280s An host of '' or port 0 tells the OS to use the default. 280s """ 280s 280s host, port = address 280s if host.startswith("["): 280s host = host.strip("[]") 280s err = None 280s 280s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 280s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 280s # The original create_connection function always returns all records. 
280s family = allowed_gai_family() 280s 280s try: 280s host.encode("idna") 280s except UnicodeError: 280s raise LocationParseError(f"'{host}', label empty or too long") from None 280s 280s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 280s af, socktype, proto, canonname, sa = res 280s sock = None 280s try: 280s sock = socket.socket(af, socktype, proto) 280s 280s # If provided, set socket level options before connecting. 280s _set_socket_options(sock, socket_options) 280s 280s if timeout is not _DEFAULT_TIMEOUT: 280s sock.settimeout(timeout) 280s if source_address: 280s sock.bind(source_address) 280s > sock.connect(sa) 280s E ConnectionRefusedError: [Errno 111] Connection refused 280s 280s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 280s 280s The above exception was the direct cause of the following exception: 280s 280s self = 280s method = 'GET', url = '/a%40b/api/contents', body = None 280s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 280s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 280s redirect = False, assert_same_host = False 280s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 280s release_conn = False, chunked = False, body_pos = None, preload_content = False 280s decode_content = False, response_kw = {} 280s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 280s destination_scheme = None, conn = None, release_this_conn = True 280s http_tunnel_required = False, err = None, clean_exit = False 280s 280s def urlopen( # type: ignore[override] 280s self, 280s method: str, 280s url: str, 280s body: _TYPE_BODY | None = None, 280s headers: typing.Mapping[str, str] | None = None, 280s retries: Retry | bool | int | None = None, 280s redirect: bool = True, 280s assert_same_host: bool = 
True, 280s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 280s pool_timeout: int | None = None, 280s release_conn: bool | None = None, 280s chunked: bool = False, 280s body_pos: _TYPE_BODY_POSITION | None = None, 280s preload_content: bool = True, 280s decode_content: bool = True, 280s **response_kw: typing.Any, 280s ) -> BaseHTTPResponse: 280s """ 280s Get a connection from the pool and perform an HTTP request. This is the 280s lowest level call for making a request, so you'll need to specify all 280s the raw details. 280s 280s .. note:: 280s 280s More commonly, it's appropriate to use a convenience method 280s such as :meth:`request`. 280s 280s .. note:: 280s 280s `release_conn` will only behave as expected if 280s `preload_content=False` because we want to make 280s `preload_content=False` the default behaviour someday soon without 280s breaking backwards compatibility. 280s 280s :param method: 280s HTTP request method (such as GET, POST, PUT, etc.) 280s 280s :param url: 280s The URL to perform the request on. 280s 280s :param body: 280s Data to send in the request body, either :class:`str`, :class:`bytes`, 280s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 280s 280s :param headers: 280s Dictionary of custom headers to send, such as User-Agent, 280s If-None-Match, etc. If None, pool headers are used. If provided, 280s these headers completely replace any pool-specific headers. 280s 280s :param retries: 280s Configure the number of retries to allow before raising a 280s :class:`~urllib3.exceptions.MaxRetryError` exception. 280s 280s Pass ``None`` to retry until you receive a response. Pass a 280s :class:`~urllib3.util.retry.Retry` object for fine-grained control 280s over different types of retries. 280s Pass an integer number to retry connection errors that many times, 280s but no other types of errors. Pass zero to never retry. 280s 280s If ``False``, then retries are disabled and any exception is raised 280s immediately. 
Also, instead of raising a MaxRetryError on redirects, 280s the redirect response will be returned. 280s 280s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 280s 280s :param redirect: 280s If True, automatically handle redirects (status codes 301, 302, 280s 303, 307, 308). Each redirect counts as a retry. Disabling retries 280s will disable redirect, too. 280s 280s :param assert_same_host: 280s If ``True``, will make sure that the host of the pool requests is 280s consistent else will raise HostChangedError. When ``False``, you can 280s use the pool on an HTTP proxy and request foreign hosts. 280s 280s :param timeout: 280s If specified, overrides the default timeout for this one 280s request. It may be a float (in seconds) or an instance of 280s :class:`urllib3.util.Timeout`. 280s 280s :param pool_timeout: 280s If set and the pool is set to block=True, then this method will 280s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 280s connection is available within the time period. 280s 280s :param bool preload_content: 280s If True, the response's body will be preloaded into memory. 280s 280s :param bool decode_content: 280s If True, will attempt to decode the body based on the 280s 'content-encoding' header. 280s 280s :param release_conn: 280s If False, then the urlopen call will not release the connection 280s back into the pool once a response is received (but will release if 280s you read the entire contents of the response such as when 280s `preload_content=True`). This is useful if you're not preloading 280s the response's content immediately. You will need to call 280s ``r.release_conn()`` on the response ``r`` to return the connection 280s back into the pool. If None, it takes the value of ``preload_content`` 280s which defaults to ``True``. 280s 280s :param bool chunked: 280s If True, urllib3 will send the body using chunked transfer 280s encoding. 
Otherwise, urllib3 will send the body using the standard 280s content-length form. Defaults to False. 280s 280s :param int body_pos: 280s Position to seek to in file-like body in the event of a retry or 280s redirect. Typically this won't need to be set because urllib3 will 280s auto-populate the value when needed. 280s """ 280s parsed_url = parse_url(url) 280s destination_scheme = parsed_url.scheme 280s 280s if headers is None: 280s headers = self.headers 280s 280s if not isinstance(retries, Retry): 280s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 280s 280s if release_conn is None: 280s release_conn = preload_content 280s 280s # Check host 280s if assert_same_host and not self.is_same_host(url): 280s raise HostChangedError(self, url, retries) 280s 280s # Ensure that the URL we're connecting to is properly encoded 280s if url.startswith("/"): 280s url = to_str(_encode_target(url)) 280s else: 280s url = to_str(parsed_url.url) 280s 280s conn = None 280s 280s # Track whether `conn` needs to be released before 280s # returning/raising/recursing. Update this variable if necessary, and 280s # leave `release_conn` constant throughout the function. That way, if 280s # the function recurses, the original value of `release_conn` will be 280s # passed down into the recursive call, and its value will be respected. 280s # 280s # See issue #651 [1] for details. 280s # 280s # [1] 280s release_this_conn = release_conn 280s 280s http_tunnel_required = connection_requires_http_tunnel( 280s self.proxy, self.proxy_config, destination_scheme 280s ) 280s 280s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 280s # have to copy the headers dict so we can safely change it without those 280s # changes being reflected in anyone else's copy. 
280s if not http_tunnel_required: 280s headers = headers.copy() # type: ignore[attr-defined] 280s headers.update(self.proxy_headers) # type: ignore[union-attr] 280s 280s # Must keep the exception bound to a separate variable or else Python 3 280s # complains about UnboundLocalError. 280s err = None 280s 280s # Keep track of whether we cleanly exited the except block. This 280s # ensures we do proper cleanup in finally. 280s clean_exit = False 280s 280s # Rewind body position, if needed. Record current position 280s # for future rewinds in the event of a redirect/retry. 280s body_pos = set_file_position(body, body_pos) 280s 280s try: 280s # Request a connection from the queue. 280s timeout_obj = self._get_timeout(timeout) 280s conn = self._get_conn(timeout=pool_timeout) 280s 280s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 280s 280s # Is this a closed/new connection that requires CONNECT tunnelling? 280s if self.proxy is not None and http_tunnel_required and conn.is_closed: 280s try: 280s self._prepare_proxy(conn) 280s except (BaseSSLError, OSError, SocketTimeout) as e: 280s self._raise_timeout( 280s err=e, url=self.proxy.url, timeout_value=conn.timeout 280s ) 280s raise 280s 280s # If we're going to release the connection in ``finally:``, then 280s # the response doesn't need to know about the connection. Otherwise 280s # it will also try to release it and we'll have a double-release 280s # mess. 
280s response_conn = conn if not release_conn else None 280s 280s # Make the request on the HTTPConnection object 280s > response = self._make_request( 280s conn, 280s method, 280s url, 280s timeout=timeout_obj, 280s body=body, 280s headers=headers, 280s chunked=chunked, 280s retries=retries, 280s response_conn=response_conn, 280s preload_content=preload_content, 280s decode_content=decode_content, 280s **response_kw, 280s ) 280s 280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 280s conn.request( 280s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 280s self.endheaders() 280s /usr/lib/python3.12/http/client.py:1331: in endheaders 280s self._send_output(message_body, encode_chunked=encode_chunked) 280s /usr/lib/python3.12/http/client.py:1091: in _send_output 280s self.send(msg) 280s /usr/lib/python3.12/http/client.py:1035: in send 280s self.connect() 280s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 280s self.sock = self._new_conn() 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s self = 280s 280s def _new_conn(self) -> socket.socket: 280s """Establish a socket connection and set nodelay settings on it. 280s 280s :return: New socket connection. 280s """ 280s try: 280s sock = connection.create_connection( 280s (self._dns_host, self.port), 280s self.timeout, 280s source_address=self.source_address, 280s socket_options=self.socket_options, 280s ) 280s except socket.gaierror as e: 280s raise NameResolutionError(self.host, self, e) from e 280s except SocketTimeout as e: 280s raise ConnectTimeoutError( 280s self, 280s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 280s ) from e 280s 280s except OSError as e: 280s > raise NewConnectionError( 280s self, f"Failed to establish a new connection: {e}" 280s ) from e 280s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 280s 280s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 280s 280s The above exception was the direct cause of the following exception: 280s 280s self = 280s request = , stream = False 280s timeout = Timeout(connect=None, read=None, total=None), verify = True 280s cert = None, proxies = OrderedDict() 280s 280s def send( 280s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 280s ): 280s """Sends PreparedRequest object. Returns Response object. 280s 280s :param request: The :class:`PreparedRequest ` being sent. 280s :param stream: (optional) Whether to stream the request content. 280s :param timeout: (optional) How long to wait for the server to send 280s data before giving up, as a float, or a :ref:`(connect timeout, 280s read timeout) ` tuple. 280s :type timeout: float or tuple or urllib3 Timeout object 280s :param verify: (optional) Either a boolean, in which case it controls whether 280s we verify the server's TLS certificate, or a string, in which case it 280s must be a path to a CA bundle to use 280s :param cert: (optional) Any user-provided SSL certificate to be trusted. 280s :param proxies: (optional) The proxies dictionary to apply to the request. 
280s :rtype: requests.Response 280s """ 280s 280s try: 280s conn = self.get_connection(request.url, proxies) 280s except LocationValueError as e: 280s raise InvalidURL(e, request=request) 280s 280s self.cert_verify(conn, request.url, verify, cert) 280s url = self.request_url(request, proxies) 280s self.add_headers( 280s request, 280s stream=stream, 280s timeout=timeout, 280s verify=verify, 280s cert=cert, 280s proxies=proxies, 280s ) 280s 280s chunked = not (request.body is None or "Content-Length" in request.headers) 280s 280s if isinstance(timeout, tuple): 280s try: 280s connect, read = timeout 280s timeout = TimeoutSauce(connect=connect, read=read) 280s except ValueError: 280s raise ValueError( 280s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 280s f"or a single float to set both timeouts to the same value." 280s ) 280s elif isinstance(timeout, TimeoutSauce): 280s pass 280s else: 280s timeout = TimeoutSauce(connect=timeout, read=timeout) 280s 280s try: 280s > resp = conn.urlopen( 280s method=request.method, 280s url=url, 280s body=request.body, 280s headers=request.headers, 280s redirect=False, 280s assert_same_host=False, 280s preload_content=False, 280s decode_content=False, 280s retries=self.max_retries, 280s timeout=timeout, 280s chunked=chunked, 280s ) 280s 280s /usr/lib/python3/dist-packages/requests/adapters.py:486: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 280s retries = retries.increment( 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 280s method = 'GET', url = '/a%40b/api/contents', response = None 280s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 280s _pool = 280s _stacktrace = 280s 280s def increment( 280s self, 280s method: str | 
None = None, 280s url: str | None = None, 280s response: BaseHTTPResponse | None = None, 280s error: Exception | None = None, 280s _pool: ConnectionPool | None = None, 280s _stacktrace: TracebackType | None = None, 280s ) -> Retry: 280s """Return a new Retry object with incremented retry counters. 280s 280s :param response: A response object, or None, if the server did not 280s return a response. 280s :type response: :class:`~urllib3.response.BaseHTTPResponse` 280s :param Exception error: An error encountered during the request, or 280s None if the response was received successfully. 280s 280s :return: A new ``Retry`` object. 280s """ 280s if self.total is False and error: 280s # Disabled, indicate to re-raise the error. 280s raise reraise(type(error), error, _stacktrace) 280s 280s total = self.total 280s if total is not None: 280s total -= 1 280s 280s connect = self.connect 280s read = self.read 280s redirect = self.redirect 280s status_count = self.status 280s other = self.other 280s cause = "unknown" 280s status = None 280s redirect_location = None 280s 280s if error and self._is_connection_error(error): 280s # Connect retry? 280s if connect is False: 280s raise reraise(type(error), error, _stacktrace) 280s elif connect is not None: 280s connect -= 1 280s 280s elif error and self._is_read_error(error): 280s # Read retry? 280s if read is False or method is None or not self._is_method_retryable(method): 280s raise reraise(type(error), error, _stacktrace) 280s elif read is not None: 280s read -= 1 280s 280s elif error: 280s # Other retry? 280s if other is not None: 280s other -= 1 280s 280s elif response and response.get_redirect_location(): 280s # Redirect retry? 
280s if redirect is not None: 280s redirect -= 1 280s cause = "too many redirects" 280s response_redirect_location = response.get_redirect_location() 280s if response_redirect_location: 280s redirect_location = response_redirect_location 280s status = response.status 280s 280s else: 280s # Incrementing because of a server error like a 500 in 280s # status_forcelist and the given method is in the allowed_methods 280s cause = ResponseError.GENERIC_ERROR 280s if response and response.status: 280s if status_count is not None: 280s status_count -= 1 280s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 280s status = response.status 280s 280s history = self.history + ( 280s RequestHistory(method, url, error, status, redirect_location), 280s ) 280s 280s new_retry = self.new( 280s total=total, 280s connect=connect, 280s read=read, 280s redirect=redirect, 280s status=status_count, 280s other=other, 280s history=history, 280s ) 280s 280s if new_retry.is_exhausted(): 280s reason = error or ResponseError(cause) 280s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 280s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 280s 280s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 280s 280s During handling of the above exception, another exception occurred: 280s 280s cls = 280s 280s @classmethod 280s def wait_until_alive(cls): 280s """Wait for the server to be alive""" 280s url = cls.base_url() + 'api/contents' 280s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 280s try: 280s > cls.fetch_url(url) 280s 280s notebook/tests/launchnotebook.py:53: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s notebook/tests/launchnotebook.py:82: in fetch_url 280s return requests.get(url) 280s 
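The `Retry.increment()` frame above is where the test harness's zero-retry budget (`Retry(total=0, ...)` in the captured locals) turns the refused connection into a `MaxRetryError`, which requests then re-raises as `ConnectionError`. A minimal reproduction of the urllib3 half, assuming a current urllib3 is importable; the URL string is taken from the log, the `ConnectionRefusedError` stands in for the real socket failure:

```python
# Minimal reproduction of the Retry exhaustion shown above: with total=0,
# the first increment() for a failed request drives the budget to -1,
# is_exhausted() becomes true, and the underlying error is re-raised
# wrapped in MaxRetryError (whose .reason is the original error).
from urllib3.exceptions import MaxRetryError
from urllib3.util.retry import Retry

retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
try:
    retries.increment(
        method="GET",
        url="/a%40b/api/contents",
        error=ConnectionRefusedError(111, "Connection refused"),
    )
except MaxRetryError as exc:
    # exc.reason is the ConnectionRefusedError we passed in
    print(type(exc.reason).__name__)
```

requests' `HTTPAdapter.send()` then catches this `MaxRetryError` and, as the earlier frames show, maps it onto `requests.exceptions.ConnectionError`.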
/usr/lib/python3/dist-packages/requests/api.py:73: in get 280s return request("get", url, params=params, **kwargs) 280s /usr/lib/python3/dist-packages/requests/api.py:59: in request 280s return session.request(method=method, url=url, **kwargs) 280s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 280s resp = self.send(prep, **send_kwargs) 280s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 280s r = adapter.send(request, **kwargs) 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s self = 280s request = , stream = False 280s timeout = Timeout(connect=None, read=None, total=None), verify = True 280s cert = None, proxies = OrderedDict() 280s 280s def send( 280s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 280s ): 280s """Sends PreparedRequest object. Returns Response object. 280s 280s :param request: The :class:`PreparedRequest ` being sent. 280s :param stream: (optional) Whether to stream the request content. 280s :param timeout: (optional) How long to wait for the server to send 280s data before giving up, as a float, or a :ref:`(connect timeout, 280s read timeout) ` tuple. 280s :type timeout: float or tuple or urllib3 Timeout object 280s :param verify: (optional) Either a boolean, in which case it controls whether 280s we verify the server's TLS certificate, or a string, in which case it 280s must be a path to a CA bundle to use 280s :param cert: (optional) Any user-provided SSL certificate to be trusted. 280s :param proxies: (optional) The proxies dictionary to apply to the request. 
280s :rtype: requests.Response 280s """ 280s 280s try: 280s conn = self.get_connection(request.url, proxies) 280s except LocationValueError as e: 280s raise InvalidURL(e, request=request) 280s 280s self.cert_verify(conn, request.url, verify, cert) 280s url = self.request_url(request, proxies) 280s self.add_headers( 280s request, 280s stream=stream, 280s timeout=timeout, 280s verify=verify, 280s cert=cert, 280s proxies=proxies, 280s ) 280s 280s chunked = not (request.body is None or "Content-Length" in request.headers) 280s 280s if isinstance(timeout, tuple): 280s try: 280s connect, read = timeout 280s timeout = TimeoutSauce(connect=connect, read=read) 280s except ValueError: 280s raise ValueError( 280s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 280s f"or a single float to set both timeouts to the same value." 280s ) 280s elif isinstance(timeout, TimeoutSauce): 280s pass 280s else: 280s timeout = TimeoutSauce(connect=timeout, read=timeout) 280s 280s try: 280s resp = conn.urlopen( 280s method=request.method, 280s url=url, 280s body=request.body, 280s headers=request.headers, 280s redirect=False, 280s assert_same_host=False, 280s preload_content=False, 280s decode_content=False, 280s retries=self.max_retries, 280s timeout=timeout, 280s chunked=chunked, 280s ) 280s 280s except (ProtocolError, OSError) as err: 280s raise ConnectionError(err, request=request) 280s 280s except MaxRetryError as e: 280s if isinstance(e.reason, ConnectTimeoutError): 280s # TODO: Remove this in 3.0.0: see #2811 280s if not isinstance(e.reason, NewConnectionError): 280s raise ConnectTimeout(e, request=request) 280s 280s if isinstance(e.reason, ResponseError): 280s raise RetryError(e, request=request) 280s 280s if isinstance(e.reason, _ProxyError): 280s raise ProxyError(e, request=request) 280s 280s if isinstance(e.reason, _SSLError): 280s # This branch is for urllib3 v1.22 and later. 
280s raise SSLError(e, request=request) 280s 280s > raise ConnectionError(e, request=request) 280s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 280s 280s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 280s 280s The above exception was the direct cause of the following exception: 280s 280s cls = 280s 280s @classmethod 280s def setup_class(cls): 280s cls.tmp_dir = TemporaryDirectory() 280s def tmp(*parts): 280s path = os.path.join(cls.tmp_dir.name, *parts) 280s try: 280s os.makedirs(path) 280s except OSError as e: 280s if e.errno != errno.EEXIST: 280s raise 280s return path 280s 280s cls.home_dir = tmp('home') 280s data_dir = cls.data_dir = tmp('data') 280s config_dir = cls.config_dir = tmp('config') 280s runtime_dir = cls.runtime_dir = tmp('runtime') 280s cls.notebook_dir = tmp('notebooks') 280s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 280s cls.env_patch.start() 280s # Patch systemwide & user-wide data & config directories, to isolate 280s # the tests from oddities of the local setup. But leave Python env 280s # locations alone, so data files for e.g. nbconvert are accessible. 280s # If this isolation isn't sufficient, you may need to run the tests in 280s # a virtualenv or conda env. 
280s cls.path_patch = patch.multiple( 280s jupyter_core.paths, 280s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 280s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 280s ) 280s cls.path_patch.start() 280s 280s config = cls.config or Config() 280s config.NotebookNotary.db_file = ':memory:' 280s 280s cls.token = hexlify(os.urandom(4)).decode('ascii') 280s 280s started = Event() 280s def start_thread(): 280s try: 280s bind_args = cls.get_bind_args() 280s app = cls.notebook = NotebookApp( 280s port_retries=0, 280s open_browser=False, 280s config_dir=cls.config_dir, 280s data_dir=cls.data_dir, 280s runtime_dir=cls.runtime_dir, 280s notebook_dir=cls.notebook_dir, 280s base_url=cls.url_prefix, 280s config=config, 280s allow_root=True, 280s token=cls.token, 280s **bind_args 280s ) 280s if "asyncio" in sys.modules: 280s app._init_asyncio_patch() 280s import asyncio 280s 280s asyncio.set_event_loop(asyncio.new_event_loop()) 280s # Patch the current loop in order to match production 280s # behavior 280s import nest_asyncio 280s 280s nest_asyncio.apply() 280s # don't register signal handler during tests 280s app.init_signal = lambda : None 280s # clear log handlers and propagate to root for nose to capture it 280s # needs to be redone after initialize, which reconfigures logging 280s app.log.propagate = True 280s app.log.handlers = [] 280s app.initialize(argv=cls.get_argv()) 280s app.log.propagate = True 280s app.log.handlers = [] 280s loop = IOLoop.current() 280s loop.add_callback(started.set) 280s app.start() 280s finally: 280s # set the event, so failure to start doesn't cause a hang 280s started.set() 280s app.session_manager.close() 280s cls.notebook_thread = Thread(target=start_thread) 280s cls.notebook_thread.daemon = True 280s cls.notebook_thread.start() 280s started.wait() 280s > cls.wait_until_alive() 280s 280s notebook/tests/launchnotebook.py:198: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s cls = 280s 280s 
280s     @classmethod
280s     def wait_until_alive(cls):
280s         """Wait for the server to be alive"""
280s         url = cls.base_url() + 'api/contents'
280s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
280s             try:
280s                 cls.fetch_url(url)
280s             except ModuleNotFoundError as error:
280s                 # Errors that should be immediately thrown back to caller
280s                 raise error
280s             except Exception as e:
280s                 if not cls.notebook_thread.is_alive():
280s >                   raise RuntimeError("The notebook server failed to start") from e
280s E                   RuntimeError: The notebook server failed to start
280s
280s notebook/tests/launchnotebook.py:59: RuntimeError
280s __________________ ERROR at setup of APITest.test_get_status ___________________
280s
280s self = 
280s
280s     def _new_conn(self) -> socket.socket:
280s         """Establish a socket connection and set nodelay settings on it.
280s
280s         :return: New socket connection.
280s         """
280s         try:
280s >           sock = connection.create_connection(
280s                 (self._dns_host, self.port),
280s                 self.timeout,
280s                 source_address=self.source_address,
280s                 socket_options=self.socket_options,
280s             )
280s
280s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
280s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
280s     raise err
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
280s
280s address = ('localhost', 12341), timeout = None, source_address = None
280s socket_options = [(6, 1, 1)]
280s
280s     def create_connection(
280s         address: tuple[str, int],
280s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
280s         source_address: tuple[str, int] | None = None,
280s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
280s     ) -> socket.socket:
280s         """Connect to *address* and return the socket object.
280s
280s         Convenience function. Connect to *address* (a 2-tuple ``(host,
280s         port)``) and return the socket object. Passing the optional
280s         *timeout* parameter will set the timeout on the socket instance
280s         before attempting to connect. If no *timeout* is supplied, the
280s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
280s         is used. If *source_address* is set it must be a tuple of (host, port)
280s         for the socket to bind as a source address before making the connection.
280s         An host of '' or port 0 tells the OS to use the default.
280s         """
280s
280s         host, port = address
280s         if host.startswith("["):
280s             host = host.strip("[]")
280s         err = None
280s
280s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
280s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
280s         # The original create_connection function always returns all records.
280s         family = allowed_gai_family()
280s
280s         try:
280s             host.encode("idna")
280s         except UnicodeError:
280s             raise LocationParseError(f"'{host}', label empty or too long") from None
280s
280s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
280s             af, socktype, proto, canonname, sa = res
280s             sock = None
280s             try:
280s                 sock = socket.socket(af, socktype, proto)
280s
280s                 # If provided, set socket level options before connecting.
280s                 _set_socket_options(sock, socket_options)
280s
280s                 if timeout is not _DEFAULT_TIMEOUT:
280s                     sock.settimeout(timeout)
280s                 if source_address:
280s                     sock.bind(source_address)
280s >               sock.connect(sa)
280s E               ConnectionRefusedError: [Errno 111] Connection refused
280s
280s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
280s
280s The above exception was the direct cause of the following exception:
280s
280s self = 
280s method = 'GET', url = '/a%40b/api/contents', body = None
280s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
280s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
280s redirect = False, assert_same_host = False
280s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
280s release_conn = False, chunked = False, body_pos = None, preload_content = False
280s decode_content = False, response_kw = {}
280s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
280s destination_scheme = None, conn = None, release_this_conn = True
280s http_tunnel_required = False, err = None, clean_exit = False
280s
280s     def urlopen(  # type: ignore[override]
280s         self,
280s         method: str,
280s         url: str,
280s         body: _TYPE_BODY | None = None,
280s         headers: typing.Mapping[str, str] | None = None,
280s         retries: Retry | bool | int | None = None,
280s         redirect: bool = True,
280s         assert_same_host: bool = True,
280s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
280s         pool_timeout: int | None = None,
280s         release_conn: bool | None = None,
280s         chunked: bool = False,
280s         body_pos: _TYPE_BODY_POSITION | None = None,
280s         preload_content: bool = True,
280s         decode_content: bool = True,
280s         **response_kw: typing.Any,
280s     ) -> BaseHTTPResponse:
280s         """
280s         Get a connection from the pool and perform an HTTP request. This is the
280s         lowest level call for making a request, so you'll need to specify all
280s         the raw details.
280s
280s         .. note::
280s
280s            More commonly, it's appropriate to use a convenience method
280s            such as :meth:`request`.
280s
280s         .. note::
280s
280s            `release_conn` will only behave as expected if
280s            `preload_content=False` because we want to make
280s            `preload_content=False` the default behaviour someday soon without
280s            breaking backwards compatibility.
280s
280s         :param method:
280s             HTTP request method (such as GET, POST, PUT, etc.)
280s
280s         :param url:
280s             The URL to perform the request on.
280s
280s         :param body:
280s             Data to send in the request body, either :class:`str`, :class:`bytes`,
280s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
280s
280s         :param headers:
280s             Dictionary of custom headers to send, such as User-Agent,
280s             If-None-Match, etc. If None, pool headers are used. If provided,
280s             these headers completely replace any pool-specific headers.
280s
280s         :param retries:
280s             Configure the number of retries to allow before raising a
280s             :class:`~urllib3.exceptions.MaxRetryError` exception.
280s
280s             Pass ``None`` to retry until you receive a response. Pass a
280s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
280s             over different types of retries.
280s             Pass an integer number to retry connection errors that many times,
280s             but no other types of errors. Pass zero to never retry.
280s
280s             If ``False``, then retries are disabled and any exception is raised
280s             immediately. Also, instead of raising a MaxRetryError on redirects,
280s             the redirect response will be returned.
280s
280s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
280s
280s         :param redirect:
280s             If True, automatically handle redirects (status codes 301, 302,
280s             303, 307, 308). Each redirect counts as a retry. Disabling retries
280s             will disable redirect, too.
280s
280s         :param assert_same_host:
280s             If ``True``, will make sure that the host of the pool requests is
280s             consistent else will raise HostChangedError. When ``False``, you can
280s             use the pool on an HTTP proxy and request foreign hosts.
280s
280s         :param timeout:
280s             If specified, overrides the default timeout for this one
280s             request. It may be a float (in seconds) or an instance of
280s             :class:`urllib3.util.Timeout`.
280s
280s         :param pool_timeout:
280s             If set and the pool is set to block=True, then this method will
280s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
280s             connection is available within the time period.
280s
280s         :param bool preload_content:
280s             If True, the response's body will be preloaded into memory.
280s
280s         :param bool decode_content:
280s             If True, will attempt to decode the body based on the
280s             'content-encoding' header.
280s
280s         :param release_conn:
280s             If False, then the urlopen call will not release the connection
280s             back into the pool once a response is received (but will release if
280s             you read the entire contents of the response such as when
280s             `preload_content=True`). This is useful if you're not preloading
280s             the response's content immediately. You will need to call
280s             ``r.release_conn()`` on the response ``r`` to return the connection
280s             back into the pool. If None, it takes the value of ``preload_content``
280s             which defaults to ``True``.
280s
280s         :param bool chunked:
280s             If True, urllib3 will send the body using chunked transfer
280s             encoding. Otherwise, urllib3 will send the body using the standard
280s             content-length form. Defaults to False.
280s
280s         :param int body_pos:
280s             Position to seek to in file-like body in the event of a retry or
280s             redirect. Typically this won't need to be set because urllib3 will
280s             auto-populate the value when needed.
280s         """
280s         parsed_url = parse_url(url)
280s         destination_scheme = parsed_url.scheme
280s
280s         if headers is None:
280s             headers = self.headers
280s
280s         if not isinstance(retries, Retry):
280s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
280s
280s         if release_conn is None:
280s             release_conn = preload_content
280s
280s         # Check host
280s         if assert_same_host and not self.is_same_host(url):
280s             raise HostChangedError(self, url, retries)
280s
280s         # Ensure that the URL we're connecting to is properly encoded
280s         if url.startswith("/"):
280s             url = to_str(_encode_target(url))
280s         else:
280s             url = to_str(parsed_url.url)
280s
280s         conn = None
280s
280s         # Track whether `conn` needs to be released before
280s         # returning/raising/recursing. Update this variable if necessary, and
280s         # leave `release_conn` constant throughout the function. That way, if
280s         # the function recurses, the original value of `release_conn` will be
280s         # passed down into the recursive call, and its value will be respected.
280s         #
280s         # See issue #651 [1] for details.
280s         #
280s         # [1] 
280s         release_this_conn = release_conn
280s
280s         http_tunnel_required = connection_requires_http_tunnel(
280s             self.proxy, self.proxy_config, destination_scheme
280s         )
280s
280s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
280s         # have to copy the headers dict so we can safely change it without those
280s         # changes being reflected in anyone else's copy.
280s         if not http_tunnel_required:
280s             headers = headers.copy()  # type: ignore[attr-defined]
280s             headers.update(self.proxy_headers)  # type: ignore[union-attr]
280s
280s         # Must keep the exception bound to a separate variable or else Python 3
280s         # complains about UnboundLocalError.
280s         err = None
280s
280s         # Keep track of whether we cleanly exited the except block. This
280s         # ensures we do proper cleanup in finally.
280s         clean_exit = False
280s
280s         # Rewind body position, if needed. Record current position
280s         # for future rewinds in the event of a redirect/retry.
280s         body_pos = set_file_position(body, body_pos)
280s
280s         try:
280s             # Request a connection from the queue.
280s             timeout_obj = self._get_timeout(timeout)
280s             conn = self._get_conn(timeout=pool_timeout)
280s
280s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
280s
280s             # Is this a closed/new connection that requires CONNECT tunnelling?
280s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
280s                 try:
280s                     self._prepare_proxy(conn)
280s                 except (BaseSSLError, OSError, SocketTimeout) as e:
280s                     self._raise_timeout(
280s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
280s                     )
280s                     raise
280s
280s             # If we're going to release the connection in ``finally:``, then
280s             # the response doesn't need to know about the connection. Otherwise
280s             # it will also try to release it and we'll have a double-release
280s             # mess.
280s             response_conn = conn if not release_conn else None
280s
280s             # Make the request on the HTTPConnection object
280s >           response = self._make_request(
280s                 conn,
280s                 method,
280s                 url,
280s                 timeout=timeout_obj,
280s                 body=body,
280s                 headers=headers,
280s                 chunked=chunked,
280s                 retries=retries,
280s                 response_conn=response_conn,
280s                 preload_content=preload_content,
280s                 decode_content=decode_content,
280s                 **response_kw,
280s             )
280s
280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
280s     conn.request(
280s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
280s     self.endheaders()
280s /usr/lib/python3.12/http/client.py:1331: in endheaders
280s     self._send_output(message_body, encode_chunked=encode_chunked)
280s /usr/lib/python3.12/http/client.py:1091: in _send_output
280s     self.send(msg)
280s /usr/lib/python3.12/http/client.py:1035: in send
280s     self.connect()
280s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
280s     self.sock = self._new_conn()
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
280s
280s self = 
280s
280s     def _new_conn(self) -> socket.socket:
280s         """Establish a socket connection and set nodelay settings on it.
280s
280s         :return: New socket connection.
280s         """
280s         try:
280s             sock = connection.create_connection(
280s                 (self._dns_host, self.port),
280s                 self.timeout,
280s                 source_address=self.source_address,
280s                 socket_options=self.socket_options,
280s             )
280s         except socket.gaierror as e:
280s             raise NameResolutionError(self.host, self, e) from e
280s         except SocketTimeout as e:
280s             raise ConnectTimeoutError(
280s                 self,
280s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
280s             ) from e
280s
280s         except OSError as e:
280s >           raise NewConnectionError(
280s                 self, f"Failed to establish a new connection: {e}"
280s             ) from e
280s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
280s
280s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
280s
280s The above exception was the direct cause of the following exception:
280s
280s self = 
280s request = , stream = False
280s timeout = Timeout(connect=None, read=None, total=None), verify = True
280s cert = None, proxies = OrderedDict()
280s
280s     def send(
280s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
280s     ):
280s         """Sends PreparedRequest object. Returns Response object.
280s
280s         :param request: The :class:`PreparedRequest ` being sent.
280s         :param stream: (optional) Whether to stream the request content.
280s         :param timeout: (optional) How long to wait for the server to send
280s             data before giving up, as a float, or a :ref:`(connect timeout,
280s             read timeout) ` tuple.
280s         :type timeout: float or tuple or urllib3 Timeout object
280s         :param verify: (optional) Either a boolean, in which case it controls whether
280s             we verify the server's TLS certificate, or a string, in which case it
280s             must be a path to a CA bundle to use
280s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
280s         :param proxies: (optional) The proxies dictionary to apply to the request.
280s         :rtype: requests.Response
280s         """
280s
280s         try:
280s             conn = self.get_connection(request.url, proxies)
280s         except LocationValueError as e:
280s             raise InvalidURL(e, request=request)
280s
280s         self.cert_verify(conn, request.url, verify, cert)
280s         url = self.request_url(request, proxies)
280s         self.add_headers(
280s             request,
280s             stream=stream,
280s             timeout=timeout,
280s             verify=verify,
280s             cert=cert,
280s             proxies=proxies,
280s         )
280s
280s         chunked = not (request.body is None or "Content-Length" in request.headers)
280s
280s         if isinstance(timeout, tuple):
280s             try:
280s                 connect, read = timeout
280s                 timeout = TimeoutSauce(connect=connect, read=read)
280s             except ValueError:
280s                 raise ValueError(
280s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
280s                     f"or a single float to set both timeouts to the same value."
280s                 )
280s         elif isinstance(timeout, TimeoutSauce):
280s             pass
280s         else:
280s             timeout = TimeoutSauce(connect=timeout, read=timeout)
280s
280s         try:
280s >           resp = conn.urlopen(
280s                 method=request.method,
280s                 url=url,
280s                 body=request.body,
280s                 headers=request.headers,
280s                 redirect=False,
280s                 assert_same_host=False,
280s                 preload_content=False,
280s                 decode_content=False,
280s                 retries=self.max_retries,
280s                 timeout=timeout,
280s                 chunked=chunked,
280s             )
280s
280s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
280s     retries = retries.increment(
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
280s
280s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
280s method = 'GET', url = '/a%40b/api/contents', response = None
280s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
280s _pool = 
280s _stacktrace = 
280s
280s     def increment(
280s         self,
280s         method: str | None = None,
280s         url: str | None = None,
280s         response: BaseHTTPResponse | None = None,
280s         error: Exception | None = None,
280s         _pool: ConnectionPool | None = None,
280s         _stacktrace: TracebackType | None = None,
280s     ) -> Retry:
280s         """Return a new Retry object with incremented retry counters.
280s
280s         :param response: A response object, or None, if the server did not
280s             return a response.
280s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
280s         :param Exception error: An error encountered during the request, or
280s             None if the response was received successfully.
280s
280s         :return: A new ``Retry`` object.
280s         """
280s         if self.total is False and error:
280s             # Disabled, indicate to re-raise the error.
280s             raise reraise(type(error), error, _stacktrace)
280s
280s         total = self.total
280s         if total is not None:
280s             total -= 1
280s
280s         connect = self.connect
280s         read = self.read
280s         redirect = self.redirect
280s         status_count = self.status
280s         other = self.other
280s         cause = "unknown"
280s         status = None
280s         redirect_location = None
280s
280s         if error and self._is_connection_error(error):
280s             # Connect retry?
280s             if connect is False:
280s                 raise reraise(type(error), error, _stacktrace)
280s             elif connect is not None:
280s                 connect -= 1
280s
280s         elif error and self._is_read_error(error):
280s             # Read retry?
280s             if read is False or method is None or not self._is_method_retryable(method):
280s                 raise reraise(type(error), error, _stacktrace)
280s             elif read is not None:
280s                 read -= 1
280s
280s         elif error:
280s             # Other retry?
280s             if other is not None:
280s                 other -= 1
280s
280s         elif response and response.get_redirect_location():
280s             # Redirect retry?
280s             if redirect is not None:
280s                 redirect -= 1
280s             cause = "too many redirects"
280s             response_redirect_location = response.get_redirect_location()
280s             if response_redirect_location:
280s                 redirect_location = response_redirect_location
280s             status = response.status
280s
280s         else:
280s             # Incrementing because of a server error like a 500 in
280s             # status_forcelist and the given method is in the allowed_methods
280s             cause = ResponseError.GENERIC_ERROR
280s             if response and response.status:
280s                 if status_count is not None:
280s                     status_count -= 1
280s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
280s                 status = response.status
280s
280s         history = self.history + (
280s             RequestHistory(method, url, error, status, redirect_location),
280s         )
280s
280s         new_retry = self.new(
280s             total=total,
280s             connect=connect,
280s             read=read,
280s             redirect=redirect,
280s             status=status_count,
280s             other=other,
280s             history=history,
280s         )
280s
280s         if new_retry.is_exhausted():
280s             reason = error or ResponseError(cause)
280s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
280s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
280s
280s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
280s
280s During handling of the above exception, another exception occurred:
280s
280s cls = 
280s
280s     @classmethod
280s     def wait_until_alive(cls):
280s         """Wait for the server to be alive"""
280s         url = cls.base_url() + 'api/contents'
280s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
280s             try:
280s >               cls.fetch_url(url)
280s
280s notebook/tests/launchnotebook.py:53: 
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
280s notebook/tests/launchnotebook.py:82: in fetch_url
280s     return requests.get(url)
280s /usr/lib/python3/dist-packages/requests/api.py:73: in get
280s     return request("get", url, params=params, **kwargs)
280s /usr/lib/python3/dist-packages/requests/api.py:59: in request
280s     return session.request(method=method, url=url, **kwargs)
280s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
280s     resp = self.send(prep, **send_kwargs)
280s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
280s     r = adapter.send(request, **kwargs)
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
280s
280s self = 
280s request = , stream = False
280s timeout = Timeout(connect=None, read=None, total=None), verify = True
280s cert = None, proxies = OrderedDict()
280s
280s     def send(
280s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
280s     ):
280s         """Sends PreparedRequest object. Returns Response object.
280s
280s         :param request: The :class:`PreparedRequest ` being sent.
280s         :param stream: (optional) Whether to stream the request content.
280s         :param timeout: (optional) How long to wait for the server to send
280s             data before giving up, as a float, or a :ref:`(connect timeout,
280s             read timeout) ` tuple.
280s         :type timeout: float or tuple or urllib3 Timeout object
280s         :param verify: (optional) Either a boolean, in which case it controls whether
280s             we verify the server's TLS certificate, or a string, in which case it
280s             must be a path to a CA bundle to use
280s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
280s         :param proxies: (optional) The proxies dictionary to apply to the request.
280s         :rtype: requests.Response
280s         """
280s
280s         try:
280s             conn = self.get_connection(request.url, proxies)
280s         except LocationValueError as e:
280s             raise InvalidURL(e, request=request)
280s
280s         self.cert_verify(conn, request.url, verify, cert)
280s         url = self.request_url(request, proxies)
280s         self.add_headers(
280s             request,
280s             stream=stream,
280s             timeout=timeout,
280s             verify=verify,
280s             cert=cert,
280s             proxies=proxies,
280s         )
280s
280s         chunked = not (request.body is None or "Content-Length" in request.headers)
280s
280s         if isinstance(timeout, tuple):
280s             try:
280s                 connect, read = timeout
280s                 timeout = TimeoutSauce(connect=connect, read=read)
280s             except ValueError:
280s                 raise ValueError(
280s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
280s                     f"or a single float to set both timeouts to the same value."
280s                 )
280s         elif isinstance(timeout, TimeoutSauce):
280s             pass
280s         else:
280s             timeout = TimeoutSauce(connect=timeout, read=timeout)
280s
280s         try:
280s             resp = conn.urlopen(
280s                 method=request.method,
280s                 url=url,
280s                 body=request.body,
280s                 headers=request.headers,
280s                 redirect=False,
280s                 assert_same_host=False,
280s                 preload_content=False,
280s                 decode_content=False,
280s                 retries=self.max_retries,
280s                 timeout=timeout,
280s                 chunked=chunked,
280s             )
280s
280s         except (ProtocolError, OSError) as err:
280s             raise ConnectionError(err, request=request)
280s
280s         except MaxRetryError as e:
280s             if isinstance(e.reason, ConnectTimeoutError):
280s                 # TODO: Remove this in 3.0.0: see #2811
280s                 if not isinstance(e.reason, NewConnectionError):
280s                     raise ConnectTimeout(e, request=request)
280s
280s             if isinstance(e.reason, ResponseError):
280s                 raise RetryError(e, request=request)
280s
280s             if isinstance(e.reason, _ProxyError):
280s                 raise ProxyError(e, request=request)
280s
280s             if isinstance(e.reason, _SSLError):
280s                 # This branch is for urllib3 v1.22 and later.
280s                 raise SSLError(e, request=request)
280s
280s >           raise ConnectionError(e, request=request)
280s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
280s
280s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
280s
280s The above exception was the direct cause of the following exception:
280s
280s cls = 
280s
280s     @classmethod
280s     def setup_class(cls):
280s         cls.tmp_dir = TemporaryDirectory()
280s         def tmp(*parts):
280s             path = os.path.join(cls.tmp_dir.name, *parts)
280s             try:
280s                 os.makedirs(path)
280s             except OSError as e:
280s                 if e.errno != errno.EEXIST:
280s                     raise
280s             return path
280s
280s         cls.home_dir = tmp('home')
280s         data_dir = cls.data_dir = tmp('data')
280s         config_dir = cls.config_dir = tmp('config')
280s         runtime_dir = cls.runtime_dir = tmp('runtime')
280s         cls.notebook_dir = tmp('notebooks')
280s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
280s         cls.env_patch.start()
280s         # Patch systemwide & user-wide data & config directories, to isolate
280s         # the tests from oddities of the local setup. But leave Python env
280s         # locations alone, so data files for e.g. nbconvert are accessible.
280s         # If this isolation isn't sufficient, you may need to run the tests in
280s         # a virtualenv or conda env.
280s         cls.path_patch = patch.multiple(
280s             jupyter_core.paths,
280s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
280s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
280s         )
280s         cls.path_patch.start()
280s
280s         config = cls.config or Config()
280s         config.NotebookNotary.db_file = ':memory:'
280s
280s         cls.token = hexlify(os.urandom(4)).decode('ascii')
280s
280s         started = Event()
280s         def start_thread():
280s             try:
280s                 bind_args = cls.get_bind_args()
280s                 app = cls.notebook = NotebookApp(
280s                     port_retries=0,
280s                     open_browser=False,
280s                     config_dir=cls.config_dir,
280s                     data_dir=cls.data_dir,
280s                     runtime_dir=cls.runtime_dir,
280s                     notebook_dir=cls.notebook_dir,
280s                     base_url=cls.url_prefix,
280s                     config=config,
280s                     allow_root=True,
280s                     token=cls.token,
280s                     **bind_args
280s                 )
280s                 if "asyncio" in sys.modules:
280s                     app._init_asyncio_patch()
280s                     import asyncio
280s
280s                     asyncio.set_event_loop(asyncio.new_event_loop())
280s                     # Patch the current loop in order to match production
280s                     # behavior
280s                     import nest_asyncio
280s
280s                     nest_asyncio.apply()
280s                 # don't register signal handler during tests
280s                 app.init_signal = lambda : None
280s                 # clear log handlers and propagate to root for nose to capture it
280s                 # needs to be redone after initialize, which reconfigures logging
280s                 app.log.propagate = True
280s                 app.log.handlers = []
280s                 app.initialize(argv=cls.get_argv())
280s                 app.log.propagate = True
280s                 app.log.handlers = []
280s                 loop = IOLoop.current()
280s                 loop.add_callback(started.set)
280s                 app.start()
280s             finally:
280s                 # set the event, so failure to start doesn't cause a hang
280s                 started.set()
280s                 app.session_manager.close()
280s         cls.notebook_thread = Thread(target=start_thread)
280s         cls.notebook_thread.daemon = True
280s         cls.notebook_thread.start()
280s         started.wait()
280s >       cls.wait_until_alive()
280s
280s notebook/tests/launchnotebook.py:198: 
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
280s
280s cls = 
280s
280s     @classmethod
280s     def wait_until_alive(cls):
280s         """Wait for the server to be alive"""
280s         url = cls.base_url() + 'api/contents'
280s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
280s             try:
280s                 cls.fetch_url(url)
280s             except ModuleNotFoundError as error:
280s                 # Errors that should be immediately thrown back to caller
280s                 raise error
280s             except Exception as e:
280s                 if not cls.notebook_thread.is_alive():
280s >                   raise RuntimeError("The notebook server failed to start") from e
280s E                   RuntimeError: The notebook server failed to start
280s
280s notebook/tests/launchnotebook.py:59: RuntimeError
280s _______________ ERROR at setup of APITest.test_no_track_activity _______________
280s
280s self = 
280s
280s     def _new_conn(self) -> socket.socket:
280s         """Establish a socket connection and set nodelay settings on it.
280s
280s         :return: New socket connection.
280s         """
280s         try:
280s >           sock = connection.create_connection(
280s                 (self._dns_host, self.port),
280s                 self.timeout,
280s                 source_address=self.source_address,
280s                 socket_options=self.socket_options,
280s             )
280s
280s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
280s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
280s     raise err
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
280s
280s address = ('localhost', 12341), timeout = None, source_address = None
280s socket_options = [(6, 1, 1)]
280s
280s     def create_connection(
280s         address: tuple[str, int],
280s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
280s         source_address: tuple[str, int] | None = None,
280s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
280s     ) -> socket.socket:
280s         """Connect to *address* and return the socket object.
280s
280s         Convenience function. Connect to *address* (a 2-tuple ``(host,
280s         port)``) and return the socket object. Passing the optional
280s         *timeout* parameter will set the timeout on the socket instance
280s         before attempting to connect. If no *timeout* is supplied, the
280s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
280s         is used. If *source_address* is set it must be a tuple of (host, port)
280s         for the socket to bind as a source address before making the connection.
280s         An host of '' or port 0 tells the OS to use the default.
280s         """
280s
280s         host, port = address
280s         if host.startswith("["):
280s             host = host.strip("[]")
280s         err = None
280s
280s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
280s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
280s         # The original create_connection function always returns all records.
280s         family = allowed_gai_family()
280s
280s         try:
280s             host.encode("idna")
280s         except UnicodeError:
280s             raise LocationParseError(f"'{host}', label empty or too long") from None
280s
280s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
280s             af, socktype, proto, canonname, sa = res
280s             sock = None
280s             try:
280s                 sock = socket.socket(af, socktype, proto)
280s
280s                 # If provided, set socket level options before connecting.
280s                 _set_socket_options(sock, socket_options)
280s
280s                 if timeout is not _DEFAULT_TIMEOUT:
280s                     sock.settimeout(timeout)
280s                 if source_address:
280s                     sock.bind(source_address)
280s >               sock.connect(sa)
280s E               ConnectionRefusedError: [Errno 111] Connection refused
280s
280s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
280s
280s The above exception was the direct cause of the following exception:
280s
280s self = 
280s method = 'GET', url = '/a%40b/api/contents', body = None
280s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
280s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
280s redirect = False, assert_same_host = False
280s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
280s release_conn = False, chunked = False, body_pos = None, preload_content = False
280s decode_content = False, response_kw = {}
280s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
280s destination_scheme = None, conn = None, release_this_conn = True
280s http_tunnel_required = False, err = None, clean_exit = False
280s
280s     def urlopen(  # type: ignore[override]
280s         self,
280s         method: str,
280s         url: str,
280s         body: _TYPE_BODY | None = None,
280s         headers: typing.Mapping[str, str] | None = None,
280s         retries: Retry | bool | int | None = None,
280s         redirect: bool = True,
280s         assert_same_host: bool = True,
280s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
280s         pool_timeout: int | None = None,
280s         release_conn: bool | None = None,
280s         chunked: bool = False,
280s         body_pos: _TYPE_BODY_POSITION | None = None,
280s         preload_content: bool = True,
280s         decode_content: bool = True,
280s         **response_kw: typing.Any,
280s     ) -> BaseHTTPResponse:
280s         """
280s         Get a connection from the pool and perform an HTTP request. This is the
280s         lowest level call for making a request, so you'll need to specify all
280s         the raw details.
280s
280s         .. note::
280s
280s            More commonly, it's appropriate to use a convenience method
280s            such as :meth:`request`.
280s
280s         .. note::
280s
280s            `release_conn` will only behave as expected if
280s            `preload_content=False` because we want to make
280s            `preload_content=False` the default behaviour someday soon without
280s            breaking backwards compatibility.
280s
280s         :param method:
280s             HTTP request method (such as GET, POST, PUT, etc.)
280s
280s         :param url:
280s             The URL to perform the request on.
280s
280s         :param body:
280s             Data to send in the request body, either :class:`str`, :class:`bytes`,
280s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
280s
280s         :param headers:
280s             Dictionary of custom headers to send, such as User-Agent,
280s             If-None-Match, etc. If None, pool headers are used. If provided,
280s             these headers completely replace any pool-specific headers.
280s
280s         :param retries:
280s             Configure the number of retries to allow before raising a
280s             :class:`~urllib3.exceptions.MaxRetryError` exception.
280s
280s             Pass ``None`` to retry until you receive a response. Pass a
280s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
280s             over different types of retries.
280s             Pass an integer number to retry connection errors that many times,
280s             but no other types of errors. Pass zero to never retry.
280s
280s             If ``False``, then retries are disabled and any exception is raised
280s             immediately. Also, instead of raising a MaxRetryError on redirects,
280s             the redirect response will be returned.
280s
280s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
280s
280s         :param redirect:
280s             If True, automatically handle redirects (status codes 301, 302,
280s             303, 307, 308). Each redirect counts as a retry. Disabling retries
280s             will disable redirect, too.
280s
280s         :param assert_same_host:
280s             If ``True``, will make sure that the host of the pool requests is
280s             consistent else will raise HostChangedError. When ``False``, you can
280s             use the pool on an HTTP proxy and request foreign hosts.
280s
280s         :param timeout:
280s             If specified, overrides the default timeout for this one
280s             request. It may be a float (in seconds) or an instance of
280s             :class:`urllib3.util.Timeout`.
280s
280s         :param pool_timeout:
280s             If set and the pool is set to block=True, then this method will
280s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
280s             connection is available within the time period.
280s
280s         :param bool preload_content:
280s             If True, the response's body will be preloaded into memory.
280s
280s         :param bool decode_content:
280s             If True, will attempt to decode the body based on the
280s             'content-encoding' header.
280s
280s         :param release_conn:
280s             If False, then the urlopen call will not release the connection
280s             back into the pool once a response is received (but will release if
280s             you read the entire contents of the response such as when
280s             `preload_content=True`). This is useful if you're not preloading
280s             the response's content immediately. You will need to call
280s             ``r.release_conn()`` on the response ``r`` to return the connection
280s             back into the pool. If None, it takes the value of ``preload_content``
280s             which defaults to ``True``.
280s
280s         :param bool chunked:
280s             If True, urllib3 will send the body using chunked transfer
280s             encoding. Otherwise, urllib3 will send the body using the standard
280s             content-length form. Defaults to False.
280s
280s         :param int body_pos:
280s             Position to seek to in file-like body in the event of a retry or
280s             redirect. Typically this won't need to be set because urllib3 will
280s             auto-populate the value when needed.
280s         """
280s         parsed_url = parse_url(url)
280s         destination_scheme = parsed_url.scheme
280s
280s         if headers is None:
280s             headers = self.headers
280s
280s         if not isinstance(retries, Retry):
280s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
280s
280s         if release_conn is None:
280s             release_conn = preload_content
280s
280s         # Check host
280s         if assert_same_host and not self.is_same_host(url):
280s             raise HostChangedError(self, url, retries)
280s
280s         # Ensure that the URL we're connecting to is properly encoded
280s         if url.startswith("/"):
280s             url = to_str(_encode_target(url))
280s         else:
280s             url = to_str(parsed_url.url)
280s
280s         conn = None
280s
280s         # Track whether `conn` needs to be released before
280s         # returning/raising/recursing. Update this variable if necessary, and
280s         # leave `release_conn` constant throughout the function. That way, if
280s         # the function recurses, the original value of `release_conn` will be
280s         # passed down into the recursive call, and its value will be respected.
280s         #
280s         # See issue #651 [1] for details.
280s         #
280s         # [1] 
280s         release_this_conn = release_conn
280s
280s         http_tunnel_required = connection_requires_http_tunnel(
280s             self.proxy, self.proxy_config, destination_scheme
280s         )
280s
280s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
280s         # have to copy the headers dict so we can safely change it without those
280s         # changes being reflected in anyone else's copy.
280s         if not http_tunnel_required:
280s             headers = headers.copy()  # type: ignore[attr-defined]
280s             headers.update(self.proxy_headers)  # type: ignore[union-attr]
280s
280s         # Must keep the exception bound to a separate variable or else Python 3
280s         # complains about UnboundLocalError.
280s         err = None
280s
280s         # Keep track of whether we cleanly exited the except block. This
280s         # ensures we do proper cleanup in finally.
280s         clean_exit = False
280s
280s         # Rewind body position, if needed. Record current position
280s         # for future rewinds in the event of a redirect/retry.
280s         body_pos = set_file_position(body, body_pos)
280s
280s         try:
280s             # Request a connection from the queue.
280s             timeout_obj = self._get_timeout(timeout)
280s             conn = self._get_conn(timeout=pool_timeout)
280s
280s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
280s
280s             # Is this a closed/new connection that requires CONNECT tunnelling?
280s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
280s                 try:
280s                     self._prepare_proxy(conn)
280s                 except (BaseSSLError, OSError, SocketTimeout) as e:
280s                     self._raise_timeout(
280s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
280s                     )
280s                     raise
280s
280s             # If we're going to release the connection in ``finally:``, then
280s             # the response doesn't need to know about the connection. Otherwise
280s             # it will also try to release it and we'll have a double-release
280s             # mess.
280s             response_conn = conn if not release_conn else None
280s
280s             # Make the request on the HTTPConnection object
280s >           response = self._make_request(
280s                 conn,
280s                 method,
280s                 url,
280s                 timeout=timeout_obj,
280s                 body=body,
280s                 headers=headers,
280s                 chunked=chunked,
280s                 retries=retries,
280s                 response_conn=response_conn,
280s                 preload_content=preload_content,
280s                 decode_content=decode_content,
280s                 **response_kw,
280s             )
280s
280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
280s     conn.request(
280s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
280s     self.endheaders()
280s /usr/lib/python3.12/http/client.py:1331: in endheaders
280s     self._send_output(message_body, encode_chunked=encode_chunked)
280s /usr/lib/python3.12/http/client.py:1091: in _send_output
280s     self.send(msg)
280s /usr/lib/python3.12/http/client.py:1035: in
send 280s self.connect() 280s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 280s self.sock = self._new_conn() 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s self = 280s 280s def _new_conn(self) -> socket.socket: 280s """Establish a socket connection and set nodelay settings on it. 280s 280s :return: New socket connection. 280s """ 280s try: 280s sock = connection.create_connection( 280s (self._dns_host, self.port), 280s self.timeout, 280s source_address=self.source_address, 280s socket_options=self.socket_options, 280s ) 280s except socket.gaierror as e: 280s raise NameResolutionError(self.host, self, e) from e 280s except SocketTimeout as e: 280s raise ConnectTimeoutError( 280s self, 280s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 280s ) from e 280s 280s except OSError as e: 280s > raise NewConnectionError( 280s self, f"Failed to establish a new connection: {e}" 280s ) from e 280s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 280s 280s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 280s 280s The above exception was the direct cause of the following exception: 280s 280s self = 280s request = , stream = False 280s timeout = Timeout(connect=None, read=None, total=None), verify = True 280s cert = None, proxies = OrderedDict() 280s 280s def send( 280s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 280s ): 280s """Sends PreparedRequest object. Returns Response object. 280s 280s :param request: The :class:`PreparedRequest ` being sent. 280s :param stream: (optional) Whether to stream the request content. 280s :param timeout: (optional) How long to wait for the server to send 280s data before giving up, as a float, or a :ref:`(connect timeout, 280s read timeout) ` tuple. 
280s :type timeout: float or tuple or urllib3 Timeout object 280s :param verify: (optional) Either a boolean, in which case it controls whether 280s we verify the server's TLS certificate, or a string, in which case it 280s must be a path to a CA bundle to use 280s :param cert: (optional) Any user-provided SSL certificate to be trusted. 280s :param proxies: (optional) The proxies dictionary to apply to the request. 280s :rtype: requests.Response 280s """ 280s 280s try: 280s conn = self.get_connection(request.url, proxies) 280s except LocationValueError as e: 280s raise InvalidURL(e, request=request) 280s 280s self.cert_verify(conn, request.url, verify, cert) 280s url = self.request_url(request, proxies) 280s self.add_headers( 280s request, 280s stream=stream, 280s timeout=timeout, 280s verify=verify, 280s cert=cert, 280s proxies=proxies, 280s ) 280s 280s chunked = not (request.body is None or "Content-Length" in request.headers) 280s 280s if isinstance(timeout, tuple): 280s try: 280s connect, read = timeout 280s timeout = TimeoutSauce(connect=connect, read=read) 280s except ValueError: 280s raise ValueError( 280s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 280s f"or a single float to set both timeouts to the same value." 
280s ) 280s elif isinstance(timeout, TimeoutSauce): 280s pass 280s else: 280s timeout = TimeoutSauce(connect=timeout, read=timeout) 280s 280s try: 280s > resp = conn.urlopen( 280s method=request.method, 280s url=url, 280s body=request.body, 280s headers=request.headers, 280s redirect=False, 280s assert_same_host=False, 280s preload_content=False, 280s decode_content=False, 280s retries=self.max_retries, 280s timeout=timeout, 280s chunked=chunked, 280s ) 280s 280s /usr/lib/python3/dist-packages/requests/adapters.py:486: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 280s retries = retries.increment( 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 280s method = 'GET', url = '/a%40b/api/contents', response = None 280s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 280s _pool = 280s _stacktrace = 280s 280s def increment( 280s self, 280s method: str | None = None, 280s url: str | None = None, 280s response: BaseHTTPResponse | None = None, 280s error: Exception | None = None, 280s _pool: ConnectionPool | None = None, 280s _stacktrace: TracebackType | None = None, 280s ) -> Retry: 280s """Return a new Retry object with incremented retry counters. 280s 280s :param response: A response object, or None, if the server did not 280s return a response. 280s :type response: :class:`~urllib3.response.BaseHTTPResponse` 280s :param Exception error: An error encountered during the request, or 280s None if the response was received successfully. 280s 280s :return: A new ``Retry`` object. 280s """ 280s if self.total is False and error: 280s # Disabled, indicate to re-raise the error. 
280s raise reraise(type(error), error, _stacktrace) 280s 280s total = self.total 280s if total is not None: 280s total -= 1 280s 280s connect = self.connect 280s read = self.read 280s redirect = self.redirect 280s status_count = self.status 280s other = self.other 280s cause = "unknown" 280s status = None 280s redirect_location = None 280s 280s if error and self._is_connection_error(error): 280s # Connect retry? 280s if connect is False: 280s raise reraise(type(error), error, _stacktrace) 280s elif connect is not None: 280s connect -= 1 280s 280s elif error and self._is_read_error(error): 280s # Read retry? 280s if read is False or method is None or not self._is_method_retryable(method): 280s raise reraise(type(error), error, _stacktrace) 280s elif read is not None: 280s read -= 1 280s 280s elif error: 280s # Other retry? 280s if other is not None: 280s other -= 1 280s 280s elif response and response.get_redirect_location(): 280s # Redirect retry? 280s if redirect is not None: 280s redirect -= 1 280s cause = "too many redirects" 280s response_redirect_location = response.get_redirect_location() 280s if response_redirect_location: 280s redirect_location = response_redirect_location 280s status = response.status 280s 280s else: 280s # Incrementing because of a server error like a 500 in 280s # status_forcelist and the given method is in the allowed_methods 280s cause = ResponseError.GENERIC_ERROR 280s if response and response.status: 280s if status_count is not None: 280s status_count -= 1 280s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 280s status = response.status 280s 280s history = self.history + ( 280s RequestHistory(method, url, error, status, redirect_location), 280s ) 280s 280s new_retry = self.new( 280s total=total, 280s connect=connect, 280s read=read, 280s redirect=redirect, 280s status=status_count, 280s other=other, 280s history=history, 280s ) 280s 280s if new_retry.is_exhausted(): 280s reason = error or 
ResponseError(cause) 280s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 280s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 280s 280s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 280s 280s During handling of the above exception, another exception occurred: 280s 280s cls = 280s 280s @classmethod 280s def wait_until_alive(cls): 280s """Wait for the server to be alive""" 280s url = cls.base_url() + 'api/contents' 280s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 280s try: 280s > cls.fetch_url(url) 280s 280s notebook/tests/launchnotebook.py:53: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s notebook/tests/launchnotebook.py:82: in fetch_url 280s return requests.get(url) 280s /usr/lib/python3/dist-packages/requests/api.py:73: in get 280s return request("get", url, params=params, **kwargs) 280s /usr/lib/python3/dist-packages/requests/api.py:59: in request 280s return session.request(method=method, url=url, **kwargs) 280s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 280s resp = self.send(prep, **send_kwargs) 280s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 280s r = adapter.send(request, **kwargs) 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s self = 280s request = , stream = False 280s timeout = Timeout(connect=None, read=None, total=None), verify = True 280s cert = None, proxies = OrderedDict() 280s 280s def send( 280s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 280s ): 280s """Sends PreparedRequest object. Returns Response object. 280s 280s :param request: The :class:`PreparedRequest ` being sent. 
280s :param stream: (optional) Whether to stream the request content. 280s :param timeout: (optional) How long to wait for the server to send 280s data before giving up, as a float, or a :ref:`(connect timeout, 280s read timeout) ` tuple. 280s :type timeout: float or tuple or urllib3 Timeout object 280s :param verify: (optional) Either a boolean, in which case it controls whether 280s we verify the server's TLS certificate, or a string, in which case it 280s must be a path to a CA bundle to use 280s :param cert: (optional) Any user-provided SSL certificate to be trusted. 280s :param proxies: (optional) The proxies dictionary to apply to the request. 280s :rtype: requests.Response 280s """ 280s 280s try: 280s conn = self.get_connection(request.url, proxies) 280s except LocationValueError as e: 280s raise InvalidURL(e, request=request) 280s 280s self.cert_verify(conn, request.url, verify, cert) 280s url = self.request_url(request, proxies) 280s self.add_headers( 280s request, 280s stream=stream, 280s timeout=timeout, 280s verify=verify, 280s cert=cert, 280s proxies=proxies, 280s ) 280s 280s chunked = not (request.body is None or "Content-Length" in request.headers) 280s 280s if isinstance(timeout, tuple): 280s try: 280s connect, read = timeout 280s timeout = TimeoutSauce(connect=connect, read=read) 280s except ValueError: 280s raise ValueError( 280s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 280s f"or a single float to set both timeouts to the same value." 
280s ) 280s elif isinstance(timeout, TimeoutSauce): 280s pass 280s else: 280s timeout = TimeoutSauce(connect=timeout, read=timeout) 280s 280s try: 280s resp = conn.urlopen( 280s method=request.method, 280s url=url, 280s body=request.body, 280s headers=request.headers, 280s redirect=False, 280s assert_same_host=False, 280s preload_content=False, 280s decode_content=False, 280s retries=self.max_retries, 280s timeout=timeout, 280s chunked=chunked, 280s ) 280s 280s except (ProtocolError, OSError) as err: 280s raise ConnectionError(err, request=request) 280s 280s except MaxRetryError as e: 280s if isinstance(e.reason, ConnectTimeoutError): 280s # TODO: Remove this in 3.0.0: see #2811 280s if not isinstance(e.reason, NewConnectionError): 280s raise ConnectTimeout(e, request=request) 280s 280s if isinstance(e.reason, ResponseError): 280s raise RetryError(e, request=request) 280s 280s if isinstance(e.reason, _ProxyError): 280s raise ProxyError(e, request=request) 280s 280s if isinstance(e.reason, _SSLError): 280s # This branch is for urllib3 v1.22 and later. 
280s raise SSLError(e, request=request) 280s 280s > raise ConnectionError(e, request=request) 280s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 280s 280s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 280s 280s The above exception was the direct cause of the following exception: 280s 280s cls = 280s 280s @classmethod 280s def setup_class(cls): 280s cls.tmp_dir = TemporaryDirectory() 280s def tmp(*parts): 280s path = os.path.join(cls.tmp_dir.name, *parts) 280s try: 280s os.makedirs(path) 280s except OSError as e: 280s if e.errno != errno.EEXIST: 280s raise 280s return path 280s 280s cls.home_dir = tmp('home') 280s data_dir = cls.data_dir = tmp('data') 280s config_dir = cls.config_dir = tmp('config') 280s runtime_dir = cls.runtime_dir = tmp('runtime') 280s cls.notebook_dir = tmp('notebooks') 280s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 280s cls.env_patch.start() 280s # Patch systemwide & user-wide data & config directories, to isolate 280s # the tests from oddities of the local setup. But leave Python env 280s # locations alone, so data files for e.g. nbconvert are accessible. 280s # If this isolation isn't sufficient, you may need to run the tests in 280s # a virtualenv or conda env. 
280s cls.path_patch = patch.multiple( 280s jupyter_core.paths, 280s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 280s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 280s ) 280s cls.path_patch.start() 280s 280s config = cls.config or Config() 280s config.NotebookNotary.db_file = ':memory:' 280s 280s cls.token = hexlify(os.urandom(4)).decode('ascii') 280s 280s started = Event() 280s def start_thread(): 280s try: 280s bind_args = cls.get_bind_args() 280s app = cls.notebook = NotebookApp( 280s port_retries=0, 280s open_browser=False, 280s config_dir=cls.config_dir, 280s data_dir=cls.data_dir, 280s runtime_dir=cls.runtime_dir, 280s notebook_dir=cls.notebook_dir, 280s base_url=cls.url_prefix, 280s config=config, 280s allow_root=True, 280s token=cls.token, 280s **bind_args 280s ) 280s if "asyncio" in sys.modules: 280s app._init_asyncio_patch() 280s import asyncio 280s 280s asyncio.set_event_loop(asyncio.new_event_loop()) 280s # Patch the current loop in order to match production 280s # behavior 280s import nest_asyncio 280s 280s nest_asyncio.apply() 280s # don't register signal handler during tests 280s app.init_signal = lambda : None 280s # clear log handlers and propagate to root for nose to capture it 280s # needs to be redone after initialize, which reconfigures logging 280s app.log.propagate = True 280s app.log.handlers = [] 280s app.initialize(argv=cls.get_argv()) 280s app.log.propagate = True 280s app.log.handlers = [] 280s loop = IOLoop.current() 280s loop.add_callback(started.set) 280s app.start() 280s finally: 280s # set the event, so failure to start doesn't cause a hang 280s started.set() 280s app.session_manager.close() 280s cls.notebook_thread = Thread(target=start_thread) 280s cls.notebook_thread.daemon = True 280s cls.notebook_thread.start() 280s started.wait() 280s > cls.wait_until_alive() 280s 280s notebook/tests/launchnotebook.py:198: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s cls = 280s 280s 
@classmethod 280s def wait_until_alive(cls): 280s """Wait for the server to be alive""" 280s url = cls.base_url() + 'api/contents' 280s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 280s try: 280s cls.fetch_url(url) 280s except ModuleNotFoundError as error: 280s # Errors that should be immediately thrown back to caller 280s raise error 280s except Exception as e: 280s if not cls.notebook_thread.is_alive(): 280s > raise RuntimeError("The notebook server failed to start") from e 280s E RuntimeError: The notebook server failed to start 280s 280s notebook/tests/launchnotebook.py:59: RuntimeError 280s ____________ ERROR at setup of APITest.test_create_retrieve_config _____________ 280s 280s self = 280s 280s def _new_conn(self) -> socket.socket: 280s """Establish a socket connection and set nodelay settings on it. 280s 280s :return: New socket connection. 280s """ 280s try: 280s > sock = connection.create_connection( 280s (self._dns_host, self.port), 280s self.timeout, 280s source_address=self.source_address, 280s socket_options=self.socket_options, 280s ) 280s 280s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 280s raise err 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s address = ('localhost', 12341), timeout = None, source_address = None 280s socket_options = [(6, 1, 1)] 280s 280s def create_connection( 280s address: tuple[str, int], 280s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 280s source_address: tuple[str, int] | None = None, 280s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 280s ) -> socket.socket: 280s """Connect to *address* and return the socket object. 280s 280s Convenience function. Connect to *address* (a 2-tuple ``(host, 280s port)``) and return the socket object. 
Passing the optional 280s *timeout* parameter will set the timeout on the socket instance 280s before attempting to connect. If no *timeout* is supplied, the 280s global default timeout setting returned by :func:`socket.getdefaulttimeout` 280s is used. If *source_address* is set it must be a tuple of (host, port) 280s for the socket to bind as a source address before making the connection. 280s An host of '' or port 0 tells the OS to use the default. 280s """ 280s 280s host, port = address 280s if host.startswith("["): 280s host = host.strip("[]") 280s err = None 280s 280s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 280s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 280s # The original create_connection function always returns all records. 280s family = allowed_gai_family() 280s 280s try: 280s host.encode("idna") 280s except UnicodeError: 280s raise LocationParseError(f"'{host}', label empty or too long") from None 280s 280s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 280s af, socktype, proto, canonname, sa = res 280s sock = None 280s try: 280s sock = socket.socket(af, socktype, proto) 280s 280s # If provided, set socket level options before connecting. 
280s _set_socket_options(sock, socket_options) 280s 280s if timeout is not _DEFAULT_TIMEOUT: 280s sock.settimeout(timeout) 280s if source_address: 280s sock.bind(source_address) 280s > sock.connect(sa) 280s E ConnectionRefusedError: [Errno 111] Connection refused 280s 280s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 280s 280s The above exception was the direct cause of the following exception: 280s 280s self = 280s method = 'GET', url = '/a%40b/api/contents', body = None 280s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 280s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 280s redirect = False, assert_same_host = False 280s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 280s release_conn = False, chunked = False, body_pos = None, preload_content = False 280s decode_content = False, response_kw = {} 280s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 280s destination_scheme = None, conn = None, release_this_conn = True 280s http_tunnel_required = False, err = None, clean_exit = False 280s 280s def urlopen( # type: ignore[override] 280s self, 280s method: str, 280s url: str, 280s body: _TYPE_BODY | None = None, 280s headers: typing.Mapping[str, str] | None = None, 280s retries: Retry | bool | int | None = None, 280s redirect: bool = True, 280s assert_same_host: bool = True, 280s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 280s pool_timeout: int | None = None, 280s release_conn: bool | None = None, 280s chunked: bool = False, 280s body_pos: _TYPE_BODY_POSITION | None = None, 280s preload_content: bool = True, 280s decode_content: bool = True, 280s **response_kw: typing.Any, 280s ) -> BaseHTTPResponse: 280s """ 280s Get a connection from the pool and perform an HTTP request. 
280s raise reraise(type(error), error, _stacktrace) 280s 280s total = self.total 280s if total is not None: 280s total -= 1 280s 280s connect = self.connect 280s read = self.read 280s redirect = self.redirect 280s status_count = self.status 280s other = self.other 280s cause = "unknown" 280s status = None 280s redirect_location = None 280s 280s if error and self._is_connection_error(error): 280s # Connect retry? 280s if connect is False: 280s raise reraise(type(error), error, _stacktrace) 280s elif connect is not None: 280s connect -= 1 280s 280s elif error and self._is_read_error(error): 280s # Read retry? 280s if read is False or method is None or not self._is_method_retryable(method): 280s raise reraise(type(error), error, _stacktrace) 280s elif read is not None: 280s read -= 1 280s 280s elif error: 280s # Other retry? 280s if other is not None: 280s other -= 1 280s 280s elif response and response.get_redirect_location(): 280s # Redirect retry? 280s if redirect is not None: 280s redirect -= 1 280s cause = "too many redirects" 280s response_redirect_location = response.get_redirect_location() 280s if response_redirect_location: 280s redirect_location = response_redirect_location 280s status = response.status 280s 280s else: 280s # Incrementing because of a server error like a 500 in 280s # status_forcelist and the given method is in the allowed_methods 280s cause = ResponseError.GENERIC_ERROR 280s if response and response.status: 280s if status_count is not None: 280s status_count -= 1 280s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 280s status = response.status 280s 280s history = self.history + ( 280s RequestHistory(method, url, error, status, redirect_location), 280s ) 280s 280s new_retry = self.new( 280s total=total, 280s connect=connect, 280s read=read, 280s redirect=redirect, 280s status=status_count, 280s other=other, 280s history=history, 280s ) 280s 280s if new_retry.is_exhausted(): 280s reason = error or 
ResponseError(cause) 280s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 280s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 280s 280s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 280s 280s During handling of the above exception, another exception occurred: 280s 280s cls = 280s 280s @classmethod 280s def wait_until_alive(cls): 280s """Wait for the server to be alive""" 280s url = cls.base_url() + 'api/contents' 280s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 280s try: 280s > cls.fetch_url(url) 280s 280s notebook/tests/launchnotebook.py:53: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s notebook/tests/launchnotebook.py:82: in fetch_url 280s return requests.get(url) 280s /usr/lib/python3/dist-packages/requests/api.py:73: in get 280s return request("get", url, params=params, **kwargs) 280s /usr/lib/python3/dist-packages/requests/api.py:59: in request 280s return session.request(method=method, url=url, **kwargs) 280s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 280s resp = self.send(prep, **send_kwargs) 280s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 280s r = adapter.send(request, **kwargs) 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s self = 280s request = , stream = False 280s timeout = Timeout(connect=None, read=None, total=None), verify = True 280s cert = None, proxies = OrderedDict() 280s 280s def send( 280s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 280s ): 280s """Sends PreparedRequest object. Returns Response object. 280s 280s :param request: The :class:`PreparedRequest ` being sent. 
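The `Retry` exhaustion raised at `urllib3/util/retry.py:515` above can be reproduced in isolation, with no server involved. A minimal sketch, using the same `Retry` configuration the traceback shows and a plain `OSError` as a stand-in for the log's `NewConnectionError`:

```python
from urllib3.util.retry import Retry
from urllib3.exceptions import MaxRetryError

# Same retry configuration as in the traceback above.
retry = Retry(total=0, connect=None, read=False, redirect=None, status=None)
err = OSError(111, "Connection refused")  # stand-in for NewConnectionError

try:
    # total=0 is decremented to -1 on the first increment, so the new
    # Retry object is exhausted and increment() raises MaxRetryError
    # wrapping the original error as its .reason attribute.
    retry.increment(method="GET", url="/a%40b/api/contents", error=err)
except MaxRetryError as e:
    assert e.reason is err
```

This is why `requests` sees a single `MaxRetryError` rather than the underlying socket error directly: the original exception survives only as the `reason` attribute and the `__cause__` chain.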
280s :param stream: (optional) Whether to stream the request content. 280s :param timeout: (optional) How long to wait for the server to send 280s data before giving up, as a float, or a :ref:`(connect timeout, 280s read timeout) ` tuple. 280s :type timeout: float or tuple or urllib3 Timeout object 280s :param verify: (optional) Either a boolean, in which case it controls whether 280s we verify the server's TLS certificate, or a string, in which case it 280s must be a path to a CA bundle to use 280s :param cert: (optional) Any user-provided SSL certificate to be trusted. 280s :param proxies: (optional) The proxies dictionary to apply to the request. 280s :rtype: requests.Response 280s """ 280s 280s try: 280s conn = self.get_connection(request.url, proxies) 280s except LocationValueError as e: 280s raise InvalidURL(e, request=request) 280s 280s self.cert_verify(conn, request.url, verify, cert) 280s url = self.request_url(request, proxies) 280s self.add_headers( 280s request, 280s stream=stream, 280s timeout=timeout, 280s verify=verify, 280s cert=cert, 280s proxies=proxies, 280s ) 280s 280s chunked = not (request.body is None or "Content-Length" in request.headers) 280s 280s if isinstance(timeout, tuple): 280s try: 280s connect, read = timeout 280s timeout = TimeoutSauce(connect=connect, read=read) 280s except ValueError: 280s raise ValueError( 280s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 280s f"or a single float to set both timeouts to the same value." 
280s ) 280s elif isinstance(timeout, TimeoutSauce): 280s pass 280s else: 280s timeout = TimeoutSauce(connect=timeout, read=timeout) 280s 280s try: 280s resp = conn.urlopen( 280s method=request.method, 280s url=url, 280s body=request.body, 280s headers=request.headers, 280s redirect=False, 280s assert_same_host=False, 280s preload_content=False, 280s decode_content=False, 280s retries=self.max_retries, 280s timeout=timeout, 280s chunked=chunked, 280s ) 280s 280s except (ProtocolError, OSError) as err: 280s raise ConnectionError(err, request=request) 280s 280s except MaxRetryError as e: 280s if isinstance(e.reason, ConnectTimeoutError): 280s # TODO: Remove this in 3.0.0: see #2811 280s if not isinstance(e.reason, NewConnectionError): 280s raise ConnectTimeout(e, request=request) 280s 280s if isinstance(e.reason, ResponseError): 280s raise RetryError(e, request=request) 280s 280s if isinstance(e.reason, _ProxyError): 280s raise ProxyError(e, request=request) 280s 280s if isinstance(e.reason, _SSLError): 280s # This branch is for urllib3 v1.22 and later. 
280s raise SSLError(e, request=request) 280s 280s > raise ConnectionError(e, request=request) 280s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 280s 280s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 280s 280s The above exception was the direct cause of the following exception: 280s 280s cls = 280s 280s @classmethod 280s def setup_class(cls): 280s cls.tmp_dir = TemporaryDirectory() 280s def tmp(*parts): 280s path = os.path.join(cls.tmp_dir.name, *parts) 280s try: 280s os.makedirs(path) 280s except OSError as e: 280s if e.errno != errno.EEXIST: 280s raise 280s return path 280s 280s cls.home_dir = tmp('home') 280s data_dir = cls.data_dir = tmp('data') 280s config_dir = cls.config_dir = tmp('config') 280s runtime_dir = cls.runtime_dir = tmp('runtime') 280s cls.notebook_dir = tmp('notebooks') 280s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 280s cls.env_patch.start() 280s # Patch systemwide & user-wide data & config directories, to isolate 280s # the tests from oddities of the local setup. But leave Python env 280s # locations alone, so data files for e.g. nbconvert are accessible. 280s # If this isolation isn't sufficient, you may need to run the tests in 280s # a virtualenv or conda env. 
280s cls.path_patch = patch.multiple( 280s jupyter_core.paths, 280s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 280s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 280s ) 280s cls.path_patch.start() 280s 280s config = cls.config or Config() 280s config.NotebookNotary.db_file = ':memory:' 280s 280s cls.token = hexlify(os.urandom(4)).decode('ascii') 280s 280s started = Event() 280s def start_thread(): 280s try: 280s bind_args = cls.get_bind_args() 280s app = cls.notebook = NotebookApp( 280s port_retries=0, 280s open_browser=False, 280s config_dir=cls.config_dir, 280s data_dir=cls.data_dir, 280s runtime_dir=cls.runtime_dir, 280s notebook_dir=cls.notebook_dir, 280s base_url=cls.url_prefix, 280s config=config, 280s allow_root=True, 280s token=cls.token, 280s **bind_args 280s ) 280s if "asyncio" in sys.modules: 280s app._init_asyncio_patch() 280s import asyncio 280s 280s asyncio.set_event_loop(asyncio.new_event_loop()) 280s # Patch the current loop in order to match production 280s # behavior 280s import nest_asyncio 280s 280s nest_asyncio.apply() 280s # don't register signal handler during tests 280s app.init_signal = lambda : None 280s # clear log handlers and propagate to root for nose to capture it 280s # needs to be redone after initialize, which reconfigures logging 280s app.log.propagate = True 280s app.log.handlers = [] 280s app.initialize(argv=cls.get_argv()) 280s app.log.propagate = True 280s app.log.handlers = [] 280s loop = IOLoop.current() 280s loop.add_callback(started.set) 280s app.start() 280s finally: 280s # set the event, so failure to start doesn't cause a hang 280s started.set() 280s app.session_manager.close() 280s cls.notebook_thread = Thread(target=start_thread) 280s cls.notebook_thread.daemon = True 280s cls.notebook_thread.start() 280s started.wait() 280s > cls.wait_until_alive() 280s 280s notebook/tests/launchnotebook.py:198: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s cls = 280s 280s 
@classmethod 280s def wait_until_alive(cls): 280s """Wait for the server to be alive""" 280s url = cls.base_url() + 'api/contents' 280s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 280s try: 280s cls.fetch_url(url) 280s except ModuleNotFoundError as error: 280s # Errors that should be immediately thrown back to caller 280s raise error 280s except Exception as e: 280s if not cls.notebook_thread.is_alive(): 280s > raise RuntimeError("The notebook server failed to start") from e 280s E RuntimeError: The notebook server failed to start 280s 280s notebook/tests/launchnotebook.py:59: RuntimeError 280s __________________ ERROR at setup of APITest.test_get_unknown __________________ 280s 280s self = 280s 280s def _new_conn(self) -> socket.socket: 280s """Establish a socket connection and set nodelay settings on it. 280s 280s :return: New socket connection. 280s """ 280s try: 280s > sock = connection.create_connection( 280s (self._dns_host, self.port), 280s self.timeout, 280s source_address=self.source_address, 280s socket_options=self.socket_options, 280s ) 280s 280s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 280s raise err 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s address = ('localhost', 12341), timeout = None, source_address = None 280s socket_options = [(6, 1, 1)] 280s 280s def create_connection( 280s address: tuple[str, int], 280s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 280s source_address: tuple[str, int] | None = None, 280s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 280s ) -> socket.socket: 280s """Connect to *address* and return the socket object. 280s 280s Convenience function. Connect to *address* (a 2-tuple ``(host, 280s port)``) and return the socket object. 
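The `wait_until_alive` loop quoted in the traceback polls the contents API until a fetch succeeds, but bails out early if the server thread has already died. A sketch of that pattern follows; the constants and the sleep between polls are assumptions (the real values live in `notebook/tests/launchnotebook.py`, which is not fully shown in this log):

```python
import time

# Assumed values; the real constants are defined in notebook/tests/launchnotebook.py.
MAX_WAITTIME = 30
POLL_INTERVAL = 1

def wait_until_alive(fetch, server_thread):
    """Poll `fetch` until it succeeds, failing fast if the server thread died."""
    for _ in range(int(MAX_WAITTIME / POLL_INTERVAL)):
        try:
            fetch()
            return
        except ModuleNotFoundError:
            raise  # environment problem: surface it immediately
        except Exception as e:
            if not server_thread.is_alive():
                raise RuntimeError("The notebook server failed to start") from e
        time.sleep(POLL_INTERVAL)
    raise TimeoutError("The notebook server never responded")
```

In the failures above, the `is_alive()` check is what fires: the server thread died during startup, so every poll gets `Connection refused` and the loop converts the last one into the `RuntimeError` pytest reports.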
Passing the optional 280s *timeout* parameter will set the timeout on the socket instance 280s before attempting to connect. If no *timeout* is supplied, the 280s global default timeout setting returned by :func:`socket.getdefaulttimeout` 280s is used. If *source_address* is set it must be a tuple of (host, port) 280s for the socket to bind as a source address before making the connection. 280s An host of '' or port 0 tells the OS to use the default. 280s """ 280s 280s host, port = address 280s if host.startswith("["): 280s host = host.strip("[]") 280s err = None 280s 280s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 280s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 280s # The original create_connection function always returns all records. 280s family = allowed_gai_family() 280s 280s try: 280s host.encode("idna") 280s except UnicodeError: 280s raise LocationParseError(f"'{host}', label empty or too long") from None 280s 280s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 280s af, socktype, proto, canonname, sa = res 280s sock = None 280s try: 280s sock = socket.socket(af, socktype, proto) 280s 280s # If provided, set socket level options before connecting. 
280s _set_socket_options(sock, socket_options) 280s 280s if timeout is not _DEFAULT_TIMEOUT: 280s sock.settimeout(timeout) 280s if source_address: 280s sock.bind(source_address) 280s > sock.connect(sa) 280s E ConnectionRefusedError: [Errno 111] Connection refused 280s 280s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 280s 280s The above exception was the direct cause of the following exception: 280s 280s self = 280s method = 'GET', url = '/a%40b/api/contents', body = None 280s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 280s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 280s redirect = False, assert_same_host = False 280s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 280s release_conn = False, chunked = False, body_pos = None, preload_content = False 280s decode_content = False, response_kw = {} 280s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 280s destination_scheme = None, conn = None, release_this_conn = True 280s http_tunnel_required = False, err = None, clean_exit = False 280s 280s def urlopen( # type: ignore[override] 280s self, 280s method: str, 280s url: str, 280s body: _TYPE_BODY | None = None, 280s headers: typing.Mapping[str, str] | None = None, 280s retries: Retry | bool | int | None = None, 280s redirect: bool = True, 280s assert_same_host: bool = True, 280s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 280s pool_timeout: int | None = None, 280s release_conn: bool | None = None, 280s chunked: bool = False, 280s body_pos: _TYPE_BODY_POSITION | None = None, 280s preload_content: bool = True, 280s decode_content: bool = True, 280s **response_kw: typing.Any, 280s ) -> BaseHTTPResponse: 280s """ 280s Get a connection from the pool and perform an HTTP request. 
This is the 280s lowest level call for making a request, so you'll need to specify all 280s the raw details. 280s 280s .. note:: 280s 280s More commonly, it's appropriate to use a convenience method 280s such as :meth:`request`. 280s 280s .. note:: 280s 280s `release_conn` will only behave as expected if 280s `preload_content=False` because we want to make 280s `preload_content=False` the default behaviour someday soon without 280s breaking backwards compatibility. 280s 280s :param method: 280s HTTP request method (such as GET, POST, PUT, etc.) 280s 280s :param url: 280s The URL to perform the request on. 280s 280s :param body: 280s Data to send in the request body, either :class:`str`, :class:`bytes`, 280s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 280s 280s :param headers: 280s Dictionary of custom headers to send, such as User-Agent, 280s If-None-Match, etc. If None, pool headers are used. If provided, 280s these headers completely replace any pool-specific headers. 280s 280s :param retries: 280s Configure the number of retries to allow before raising a 280s :class:`~urllib3.exceptions.MaxRetryError` exception. 280s 280s Pass ``None`` to retry until you receive a response. Pass a 280s :class:`~urllib3.util.retry.Retry` object for fine-grained control 280s over different types of retries. 280s Pass an integer number to retry connection errors that many times, 280s but no other types of errors. Pass zero to never retry. 280s 280s If ``False``, then retries are disabled and any exception is raised 280s immediately. Also, instead of raising a MaxRetryError on redirects, 280s the redirect response will be returned. 280s 280s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 280s 280s :param redirect: 280s If True, automatically handle redirects (status codes 301, 302, 280s 303, 307, 308). Each redirect counts as a retry. Disabling retries 280s will disable redirect, too. 
280s 280s :param assert_same_host: 280s If ``True``, will make sure that the host of the pool requests is 280s consistent else will raise HostChangedError. When ``False``, you can 280s use the pool on an HTTP proxy and request foreign hosts. 280s 280s :param timeout: 280s If specified, overrides the default timeout for this one 280s request. It may be a float (in seconds) or an instance of 280s :class:`urllib3.util.Timeout`. 280s 280s :param pool_timeout: 280s If set and the pool is set to block=True, then this method will 280s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 280s connection is available within the time period. 280s 280s :param bool preload_content: 280s If True, the response's body will be preloaded into memory. 280s 280s :param bool decode_content: 280s If True, will attempt to decode the body based on the 280s 'content-encoding' header. 280s 280s :param release_conn: 280s If False, then the urlopen call will not release the connection 280s back into the pool once a response is received (but will release if 280s you read the entire contents of the response such as when 280s `preload_content=True`). This is useful if you're not preloading 280s the response's content immediately. You will need to call 280s ``r.release_conn()`` on the response ``r`` to return the connection 280s back into the pool. If None, it takes the value of ``preload_content`` 280s which defaults to ``True``. 280s 280s :param bool chunked: 280s If True, urllib3 will send the body using chunked transfer 280s encoding. Otherwise, urllib3 will send the body using the standard 280s content-length form. Defaults to False. 280s 280s :param int body_pos: 280s Position to seek to in file-like body in the event of a retry or 280s redirect. Typically this won't need to be set because urllib3 will 280s auto-populate the value when needed. 
280s """ 280s parsed_url = parse_url(url) 280s destination_scheme = parsed_url.scheme 280s 280s if headers is None: 280s headers = self.headers 280s 280s if not isinstance(retries, Retry): 280s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 280s 280s if release_conn is None: 280s release_conn = preload_content 280s 280s # Check host 280s if assert_same_host and not self.is_same_host(url): 280s raise HostChangedError(self, url, retries) 280s 280s # Ensure that the URL we're connecting to is properly encoded 280s if url.startswith("/"): 280s url = to_str(_encode_target(url)) 280s else: 280s url = to_str(parsed_url.url) 280s 280s conn = None 280s 280s # Track whether `conn` needs to be released before 280s # returning/raising/recursing. Update this variable if necessary, and 280s # leave `release_conn` constant throughout the function. That way, if 280s # the function recurses, the original value of `release_conn` will be 280s # passed down into the recursive call, and its value will be respected. 280s # 280s # See issue #651 [1] for details. 280s # 280s # [1] 280s release_this_conn = release_conn 280s 280s http_tunnel_required = connection_requires_http_tunnel( 280s self.proxy, self.proxy_config, destination_scheme 280s ) 280s 280s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 280s # have to copy the headers dict so we can safely change it without those 280s # changes being reflected in anyone else's copy. 280s if not http_tunnel_required: 280s headers = headers.copy() # type: ignore[attr-defined] 280s headers.update(self.proxy_headers) # type: ignore[union-attr] 280s 280s # Must keep the exception bound to a separate variable or else Python 3 280s # complains about UnboundLocalError. 280s err = None 280s 280s # Keep track of whether we cleanly exited the except block. This 280s # ensures we do proper cleanup in finally. 280s clean_exit = False 280s 280s # Rewind body position, if needed. 
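Every error in this log bottoms out in `[Errno 111] Connection refused`: nothing was listening on `localhost:12341`. That condition is easy to reproduce with the standard library alone, independent of urllib3. A small sketch (binding to port 0 to discover a free port is the only assumption; a rare race could reassign the port between close and connect):

```python
import errno
import socket

# Find a port with no listener: bind to an ephemeral port, then close it.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
port = probe.getsockname()[1]
probe.close()

caught = None
try:
    # This is the stdlib call that urllib3's create_connection wraps.
    socket.create_connection(("127.0.0.1", port), timeout=1)
except ConnectionRefusedError as e:
    caught = e

print(caught.errno == errno.ECONNREFUSED)  # True: nothing is listening
```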
280s ) 280s elif isinstance(timeout, TimeoutSauce): 280s pass 280s else: 280s timeout = TimeoutSauce(connect=timeout, read=timeout) 280s 280s try: 280s resp = conn.urlopen( 280s method=request.method, 280s url=url, 280s body=request.body, 280s headers=request.headers, 280s redirect=False, 280s assert_same_host=False, 280s preload_content=False, 280s decode_content=False, 280s retries=self.max_retries, 280s timeout=timeout, 280s chunked=chunked, 280s ) 280s 280s except (ProtocolError, OSError) as err: 280s raise ConnectionError(err, request=request) 280s 280s except MaxRetryError as e: 280s if isinstance(e.reason, ConnectTimeoutError): 280s # TODO: Remove this in 3.0.0: see #2811 280s if not isinstance(e.reason, NewConnectionError): 280s raise ConnectTimeout(e, request=request) 280s 280s if isinstance(e.reason, ResponseError): 280s raise RetryError(e, request=request) 280s 280s if isinstance(e.reason, _ProxyError): 280s raise ProxyError(e, request=request) 280s 280s if isinstance(e.reason, _SSLError): 280s # This branch is for urllib3 v1.22 and later. 
280s raise SSLError(e, request=request) 280s 280s > raise ConnectionError(e, request=request) 280s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 280s 280s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 280s 280s The above exception was the direct cause of the following exception: 280s 280s cls = 280s 280s @classmethod 280s def setup_class(cls): 280s cls.tmp_dir = TemporaryDirectory() 280s def tmp(*parts): 280s path = os.path.join(cls.tmp_dir.name, *parts) 280s try: 280s os.makedirs(path) 280s except OSError as e: 280s if e.errno != errno.EEXIST: 280s raise 280s return path 280s 280s cls.home_dir = tmp('home') 280s data_dir = cls.data_dir = tmp('data') 280s config_dir = cls.config_dir = tmp('config') 280s runtime_dir = cls.runtime_dir = tmp('runtime') 280s cls.notebook_dir = tmp('notebooks') 280s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 280s cls.env_patch.start() 280s # Patch systemwide & user-wide data & config directories, to isolate 280s # the tests from oddities of the local setup. But leave Python env 280s # locations alone, so data files for e.g. nbconvert are accessible. 280s # If this isolation isn't sufficient, you may need to run the tests in 280s # a virtualenv or conda env. 
280s cls.path_patch = patch.multiple( 280s jupyter_core.paths, 280s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 280s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 280s ) 280s cls.path_patch.start() 280s 280s config = cls.config or Config() 280s config.NotebookNotary.db_file = ':memory:' 280s 280s cls.token = hexlify(os.urandom(4)).decode('ascii') 280s 280s started = Event() 280s def start_thread(): 280s try: 280s bind_args = cls.get_bind_args() 280s app = cls.notebook = NotebookApp( 280s port_retries=0, 280s open_browser=False, 280s config_dir=cls.config_dir, 280s data_dir=cls.data_dir, 280s runtime_dir=cls.runtime_dir, 280s notebook_dir=cls.notebook_dir, 280s base_url=cls.url_prefix, 280s config=config, 280s allow_root=True, 280s token=cls.token, 280s **bind_args 280s ) 280s if "asyncio" in sys.modules: 280s app._init_asyncio_patch() 280s import asyncio 280s 280s asyncio.set_event_loop(asyncio.new_event_loop()) 280s # Patch the current loop in order to match production 280s # behavior 280s import nest_asyncio 280s 280s nest_asyncio.apply() 280s # don't register signal handler during tests 280s app.init_signal = lambda : None 280s # clear log handlers and propagate to root for nose to capture it 280s # needs to be redone after initialize, which reconfigures logging 280s app.log.propagate = True 280s app.log.handlers = [] 280s app.initialize(argv=cls.get_argv()) 280s app.log.propagate = True 280s app.log.handlers = [] 280s loop = IOLoop.current() 280s loop.add_callback(started.set) 280s app.start() 280s finally: 280s # set the event, so failure to start doesn't cause a hang 280s started.set() 280s app.session_manager.close() 280s cls.notebook_thread = Thread(target=start_thread) 280s cls.notebook_thread.daemon = True 280s cls.notebook_thread.start() 280s started.wait() 280s > cls.wait_until_alive() 280s 280s notebook/tests/launchnotebook.py:198: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s cls = 280s 280s 
280s     @classmethod
280s     def wait_until_alive(cls):
280s         """Wait for the server to be alive"""
280s         url = cls.base_url() + 'api/contents'
280s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
280s             try:
280s                 cls.fetch_url(url)
280s             except ModuleNotFoundError as error:
280s                 # Errors that should be immediately thrown back to caller
280s                 raise error
280s             except Exception as e:
280s                 if not cls.notebook_thread.is_alive():
280s >                   raise RuntimeError("The notebook server failed to start") from e
280s E                   RuntimeError: The notebook server failed to start
280s 
280s notebook/tests/launchnotebook.py:59: RuntimeError
280s ____________________ ERROR at setup of APITest.test_modify _____________________
280s 
280s self = 
280s 
280s     def _new_conn(self) -> socket.socket:
280s         """Establish a socket connection and set nodelay settings on it.
280s 
280s         :return: New socket connection.
280s         """
280s         try:
280s >           sock = connection.create_connection(
280s                 (self._dns_host, self.port),
280s                 self.timeout,
280s                 source_address=self.source_address,
280s                 socket_options=self.socket_options,
280s             )
280s 
280s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
280s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
280s     raise err
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
280s 
280s address = ('localhost', 12341), timeout = None, source_address = None
280s socket_options = [(6, 1, 1)]
280s 
280s def create_connection(
280s     address: tuple[str, int],
280s     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
280s     source_address: tuple[str, int] | None = None,
280s     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
280s ) -> socket.socket:
280s     """Connect to *address* and return the socket object.
280s 
280s     Convenience function. Connect to *address* (a 2-tuple ``(host,
280s     port)``) and return the socket object.
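The `create_connection` failure quoted above reduces to a plain refused TCP connect. A minimal stdlib sketch of the same `[Errno 111]` condition; the bind-then-close trick for finding an unused port is an illustration, not something the test suite does:

```python
import errno
import socket

# Find a local port with no listener: bind to port 0 so the kernel
# assigns a free port, record it, then close the socket again.
probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
probe.bind(("127.0.0.1", 0))
port = probe.getsockname()[1]
probe.close()

# Connecting to the now-unused port fails the same way the log does:
# ConnectionRefusedError, errno ECONNREFUSED (111 on Linux).
try:
    socket.create_connection(("127.0.0.1", port), timeout=1)
    refused = False
except ConnectionRefusedError as exc:
    refused = exc.errno == errno.ECONNREFUSED

print("connection refused:", refused)
```

There is a small race here (another process could claim the port between the close and the connect), which is acceptable for a sketch but is also exactly why the real suite asks the NotebookApp for its bound port instead.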
Passing the optional 280s *timeout* parameter will set the timeout on the socket instance 280s before attempting to connect. If no *timeout* is supplied, the 280s global default timeout setting returned by :func:`socket.getdefaulttimeout` 280s is used. If *source_address* is set it must be a tuple of (host, port) 280s for the socket to bind as a source address before making the connection. 280s An host of '' or port 0 tells the OS to use the default. 280s """ 280s 280s host, port = address 280s if host.startswith("["): 280s host = host.strip("[]") 280s err = None 280s 280s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 280s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 280s # The original create_connection function always returns all records. 280s family = allowed_gai_family() 280s 280s try: 280s host.encode("idna") 280s except UnicodeError: 280s raise LocationParseError(f"'{host}', label empty or too long") from None 280s 280s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 280s af, socktype, proto, canonname, sa = res 280s sock = None 280s try: 280s sock = socket.socket(af, socktype, proto) 280s 280s # If provided, set socket level options before connecting. 
280s _set_socket_options(sock, socket_options) 280s 280s if timeout is not _DEFAULT_TIMEOUT: 280s sock.settimeout(timeout) 280s if source_address: 280s sock.bind(source_address) 280s > sock.connect(sa) 280s E ConnectionRefusedError: [Errno 111] Connection refused 280s 280s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 280s 280s The above exception was the direct cause of the following exception: 280s 280s self = 280s method = 'GET', url = '/a%40b/api/contents', body = None 280s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 280s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 280s redirect = False, assert_same_host = False 280s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 280s release_conn = False, chunked = False, body_pos = None, preload_content = False 280s decode_content = False, response_kw = {} 280s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 280s destination_scheme = None, conn = None, release_this_conn = True 280s http_tunnel_required = False, err = None, clean_exit = False 280s 280s def urlopen( # type: ignore[override] 280s self, 280s method: str, 280s url: str, 280s body: _TYPE_BODY | None = None, 280s headers: typing.Mapping[str, str] | None = None, 280s retries: Retry | bool | int | None = None, 280s redirect: bool = True, 280s assert_same_host: bool = True, 280s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 280s pool_timeout: int | None = None, 280s release_conn: bool | None = None, 280s chunked: bool = False, 280s body_pos: _TYPE_BODY_POSITION | None = None, 280s preload_content: bool = True, 280s decode_content: bool = True, 280s **response_kw: typing.Any, 280s ) -> BaseHTTPResponse: 280s """ 280s Get a connection from the pool and perform an HTTP request. 
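The context lines above show `retries = Retry(total=0, connect=None, read=False, ...)`, which is what requests hands to `urlopen`; a single refused connection therefore exhausts the retry budget immediately. A minimal sketch of that exhaustion, assuming urllib3 (1.26+ or 2.x, as on the testbed) is importable; passing `None` where urllib3 would pass the connection object is an illustration only:

```python
from urllib3.exceptions import MaxRetryError, NewConnectionError
from urllib3.util.retry import Retry

# The same Retry configuration shown in the traceback context.
retry = Retry(total=0, connect=None, read=False, redirect=None, status=None)

# NewConnectionError subclasses ConnectTimeoutError, so Retry classifies
# it as a connection error (the cause of the "TODO: Remove this in
# 3.0.0" isinstance check in requests' adapter, also visible above).
error = NewConnectionError(
    None, "Failed to establish a new connection: [Errno 111] Connection refused"
)

# increment() decrements total to -1, sees the new Retry is exhausted,
# and raises MaxRetryError wrapping the original error as .reason.
try:
    retry.increment(method="GET", url="/a%40b/api/contents", error=error)
    exhausted = False
except MaxRetryError as exc:
    exhausted = exc.reason is error

print("retries exhausted:", exhausted)
```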
This is the 280s lowest level call for making a request, so you'll need to specify all 280s the raw details. 280s 280s .. note:: 280s 280s More commonly, it's appropriate to use a convenience method 280s such as :meth:`request`. 280s 280s .. note:: 280s 280s `release_conn` will only behave as expected if 280s `preload_content=False` because we want to make 280s `preload_content=False` the default behaviour someday soon without 280s breaking backwards compatibility. 280s 280s :param method: 280s HTTP request method (such as GET, POST, PUT, etc.) 280s 280s :param url: 280s The URL to perform the request on. 280s 280s :param body: 280s Data to send in the request body, either :class:`str`, :class:`bytes`, 280s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 280s 280s :param headers: 280s Dictionary of custom headers to send, such as User-Agent, 280s If-None-Match, etc. If None, pool headers are used. If provided, 280s these headers completely replace any pool-specific headers. 280s 280s :param retries: 280s Configure the number of retries to allow before raising a 280s :class:`~urllib3.exceptions.MaxRetryError` exception. 280s 280s Pass ``None`` to retry until you receive a response. Pass a 280s :class:`~urllib3.util.retry.Retry` object for fine-grained control 280s over different types of retries. 280s Pass an integer number to retry connection errors that many times, 280s but no other types of errors. Pass zero to never retry. 280s 280s If ``False``, then retries are disabled and any exception is raised 280s immediately. Also, instead of raising a MaxRetryError on redirects, 280s the redirect response will be returned. 280s 280s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 280s 280s :param redirect: 280s If True, automatically handle redirects (status codes 301, 302, 280s 303, 307, 308). Each redirect counts as a retry. Disabling retries 280s will disable redirect, too. 
280s 280s :param assert_same_host: 280s If ``True``, will make sure that the host of the pool requests is 280s consistent else will raise HostChangedError. When ``False``, you can 280s use the pool on an HTTP proxy and request foreign hosts. 280s 280s :param timeout: 280s If specified, overrides the default timeout for this one 280s request. It may be a float (in seconds) or an instance of 280s :class:`urllib3.util.Timeout`. 280s 280s :param pool_timeout: 280s If set and the pool is set to block=True, then this method will 280s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 280s connection is available within the time period. 280s 280s :param bool preload_content: 280s If True, the response's body will be preloaded into memory. 280s 280s :param bool decode_content: 280s If True, will attempt to decode the body based on the 280s 'content-encoding' header. 280s 280s :param release_conn: 280s If False, then the urlopen call will not release the connection 280s back into the pool once a response is received (but will release if 280s you read the entire contents of the response such as when 280s `preload_content=True`). This is useful if you're not preloading 280s the response's content immediately. You will need to call 280s ``r.release_conn()`` on the response ``r`` to return the connection 280s back into the pool. If None, it takes the value of ``preload_content`` 280s which defaults to ``True``. 280s 280s :param bool chunked: 280s If True, urllib3 will send the body using chunked transfer 280s encoding. Otherwise, urllib3 will send the body using the standard 280s content-length form. Defaults to False. 280s 280s :param int body_pos: 280s Position to seek to in file-like body in the event of a retry or 280s redirect. Typically this won't need to be set because urllib3 will 280s auto-populate the value when needed. 
280s """ 280s parsed_url = parse_url(url) 280s destination_scheme = parsed_url.scheme 280s 280s if headers is None: 280s headers = self.headers 280s 280s if not isinstance(retries, Retry): 280s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 280s 280s if release_conn is None: 280s release_conn = preload_content 280s 280s # Check host 280s if assert_same_host and not self.is_same_host(url): 280s raise HostChangedError(self, url, retries) 280s 280s # Ensure that the URL we're connecting to is properly encoded 280s if url.startswith("/"): 280s url = to_str(_encode_target(url)) 280s else: 280s url = to_str(parsed_url.url) 280s 280s conn = None 280s 280s # Track whether `conn` needs to be released before 280s # returning/raising/recursing. Update this variable if necessary, and 280s # leave `release_conn` constant throughout the function. That way, if 280s # the function recurses, the original value of `release_conn` will be 280s # passed down into the recursive call, and its value will be respected. 280s # 280s # See issue #651 [1] for details. 280s # 280s # [1] 280s release_this_conn = release_conn 280s 280s http_tunnel_required = connection_requires_http_tunnel( 280s self.proxy, self.proxy_config, destination_scheme 280s ) 280s 280s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 280s # have to copy the headers dict so we can safely change it without those 280s # changes being reflected in anyone else's copy. 280s if not http_tunnel_required: 280s headers = headers.copy() # type: ignore[attr-defined] 280s headers.update(self.proxy_headers) # type: ignore[union-attr] 280s 280s # Must keep the exception bound to a separate variable or else Python 3 280s # complains about UnboundLocalError. 280s err = None 280s 280s # Keep track of whether we cleanly exited the except block. This 280s # ensures we do proper cleanup in finally. 280s clean_exit = False 280s 280s # Rewind body position, if needed. 
Record current position 280s # for future rewinds in the event of a redirect/retry. 280s body_pos = set_file_position(body, body_pos) 280s 280s try: 280s # Request a connection from the queue. 280s timeout_obj = self._get_timeout(timeout) 280s conn = self._get_conn(timeout=pool_timeout) 280s 280s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 280s 280s # Is this a closed/new connection that requires CONNECT tunnelling? 280s if self.proxy is not None and http_tunnel_required and conn.is_closed: 280s try: 280s self._prepare_proxy(conn) 280s except (BaseSSLError, OSError, SocketTimeout) as e: 280s self._raise_timeout( 280s err=e, url=self.proxy.url, timeout_value=conn.timeout 280s ) 280s raise 280s 280s # If we're going to release the connection in ``finally:``, then 280s # the response doesn't need to know about the connection. Otherwise 280s # it will also try to release it and we'll have a double-release 280s # mess. 280s response_conn = conn if not release_conn else None 280s 280s # Make the request on the HTTPConnection object 280s > response = self._make_request( 280s conn, 280s method, 280s url, 280s timeout=timeout_obj, 280s body=body, 280s headers=headers, 280s chunked=chunked, 280s retries=retries, 280s response_conn=response_conn, 280s preload_content=preload_content, 280s decode_content=decode_content, 280s **response_kw, 280s ) 280s 280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 280s conn.request( 280s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 280s self.endheaders() 280s /usr/lib/python3.12/http/client.py:1331: in endheaders 280s self._send_output(message_body, encode_chunked=encode_chunked) 280s /usr/lib/python3.12/http/client.py:1091: in _send_output 280s self.send(msg) 280s /usr/lib/python3.12/http/client.py:1035: in 
send 280s self.connect() 280s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 280s self.sock = self._new_conn() 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s self = 280s 280s def _new_conn(self) -> socket.socket: 280s """Establish a socket connection and set nodelay settings on it. 280s 280s :return: New socket connection. 280s """ 280s try: 280s sock = connection.create_connection( 280s (self._dns_host, self.port), 280s self.timeout, 280s source_address=self.source_address, 280s socket_options=self.socket_options, 280s ) 280s except socket.gaierror as e: 280s raise NameResolutionError(self.host, self, e) from e 280s except SocketTimeout as e: 280s raise ConnectTimeoutError( 280s self, 280s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 280s ) from e 280s 280s except OSError as e: 280s > raise NewConnectionError( 280s self, f"Failed to establish a new connection: {e}" 280s ) from e 280s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 280s 280s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 280s 280s The above exception was the direct cause of the following exception: 280s 280s self = 280s request = , stream = False 280s timeout = Timeout(connect=None, read=None, total=None), verify = True 280s cert = None, proxies = OrderedDict() 280s 280s def send( 280s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 280s ): 280s """Sends PreparedRequest object. Returns Response object. 280s 280s :param request: The :class:`PreparedRequest ` being sent. 280s :param stream: (optional) Whether to stream the request content. 280s :param timeout: (optional) How long to wait for the server to send 280s data before giving up, as a float, or a :ref:`(connect timeout, 280s read timeout) ` tuple. 
280s :type timeout: float or tuple or urllib3 Timeout object 280s :param verify: (optional) Either a boolean, in which case it controls whether 280s we verify the server's TLS certificate, or a string, in which case it 280s must be a path to a CA bundle to use 280s :param cert: (optional) Any user-provided SSL certificate to be trusted. 280s :param proxies: (optional) The proxies dictionary to apply to the request. 280s :rtype: requests.Response 280s """ 280s 280s try: 280s conn = self.get_connection(request.url, proxies) 280s except LocationValueError as e: 280s raise InvalidURL(e, request=request) 280s 280s self.cert_verify(conn, request.url, verify, cert) 280s url = self.request_url(request, proxies) 280s self.add_headers( 280s request, 280s stream=stream, 280s timeout=timeout, 280s verify=verify, 280s cert=cert, 280s proxies=proxies, 280s ) 280s 280s chunked = not (request.body is None or "Content-Length" in request.headers) 280s 280s if isinstance(timeout, tuple): 280s try: 280s connect, read = timeout 280s timeout = TimeoutSauce(connect=connect, read=read) 280s except ValueError: 280s raise ValueError( 280s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 280s f"or a single float to set both timeouts to the same value." 
280s ) 280s elif isinstance(timeout, TimeoutSauce): 280s pass 280s else: 280s timeout = TimeoutSauce(connect=timeout, read=timeout) 280s 280s try: 280s > resp = conn.urlopen( 280s method=request.method, 280s url=url, 280s body=request.body, 280s headers=request.headers, 280s redirect=False, 280s assert_same_host=False, 280s preload_content=False, 280s decode_content=False, 280s retries=self.max_retries, 280s timeout=timeout, 280s chunked=chunked, 280s ) 280s 280s /usr/lib/python3/dist-packages/requests/adapters.py:486: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 280s retries = retries.increment( 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 280s method = 'GET', url = '/a%40b/api/contents', response = None 280s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 280s _pool = 280s _stacktrace = 280s 280s def increment( 280s self, 280s method: str | None = None, 280s url: str | None = None, 280s response: BaseHTTPResponse | None = None, 280s error: Exception | None = None, 280s _pool: ConnectionPool | None = None, 280s _stacktrace: TracebackType | None = None, 280s ) -> Retry: 280s """Return a new Retry object with incremented retry counters. 280s 280s :param response: A response object, or None, if the server did not 280s return a response. 280s :type response: :class:`~urllib3.response.BaseHTTPResponse` 280s :param Exception error: An error encountered during the request, or 280s None if the response was received successfully. 280s 280s :return: A new ``Retry`` object. 280s """ 280s if self.total is False and error: 280s # Disabled, indicate to re-raise the error. 
280s raise reraise(type(error), error, _stacktrace) 280s 280s total = self.total 280s if total is not None: 280s total -= 1 280s 280s connect = self.connect 280s read = self.read 280s redirect = self.redirect 280s status_count = self.status 280s other = self.other 280s cause = "unknown" 280s status = None 280s redirect_location = None 280s 280s if error and self._is_connection_error(error): 280s # Connect retry? 280s if connect is False: 280s raise reraise(type(error), error, _stacktrace) 280s elif connect is not None: 280s connect -= 1 280s 280s elif error and self._is_read_error(error): 280s # Read retry? 280s if read is False or method is None or not self._is_method_retryable(method): 280s raise reraise(type(error), error, _stacktrace) 280s elif read is not None: 280s read -= 1 280s 280s elif error: 280s # Other retry? 280s if other is not None: 280s other -= 1 280s 280s elif response and response.get_redirect_location(): 280s # Redirect retry? 280s if redirect is not None: 280s redirect -= 1 280s cause = "too many redirects" 280s response_redirect_location = response.get_redirect_location() 280s if response_redirect_location: 280s redirect_location = response_redirect_location 280s status = response.status 280s 280s else: 280s # Incrementing because of a server error like a 500 in 280s # status_forcelist and the given method is in the allowed_methods 280s cause = ResponseError.GENERIC_ERROR 280s if response and response.status: 280s if status_count is not None: 280s status_count -= 1 280s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 280s status = response.status 280s 280s history = self.history + ( 280s RequestHistory(method, url, error, status, redirect_location), 280s ) 280s 280s new_retry = self.new( 280s total=total, 280s connect=connect, 280s read=read, 280s redirect=redirect, 280s status=status_count, 280s other=other, 280s history=history, 280s ) 280s 280s if new_retry.is_exhausted(): 280s reason = error or 
ResponseError(cause) 280s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 280s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 280s 280s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 280s 280s During handling of the above exception, another exception occurred: 280s 280s cls = 280s 280s @classmethod 280s def wait_until_alive(cls): 280s """Wait for the server to be alive""" 280s url = cls.base_url() + 'api/contents' 280s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 280s try: 280s > cls.fetch_url(url) 280s 280s notebook/tests/launchnotebook.py:53: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s notebook/tests/launchnotebook.py:82: in fetch_url 280s return requests.get(url) 280s /usr/lib/python3/dist-packages/requests/api.py:73: in get 280s return request("get", url, params=params, **kwargs) 280s /usr/lib/python3/dist-packages/requests/api.py:59: in request 280s return session.request(method=method, url=url, **kwargs) 280s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 280s resp = self.send(prep, **send_kwargs) 280s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 280s r = adapter.send(request, **kwargs) 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s self = 280s request = , stream = False 280s timeout = Timeout(connect=None, read=None, total=None), verify = True 280s cert = None, proxies = OrderedDict() 280s 280s def send( 280s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 280s ): 280s """Sends PreparedRequest object. Returns Response object. 280s 280s :param request: The :class:`PreparedRequest ` being sent. 
280s :param stream: (optional) Whether to stream the request content. 280s :param timeout: (optional) How long to wait for the server to send 280s data before giving up, as a float, or a :ref:`(connect timeout, 280s read timeout) ` tuple. 280s :type timeout: float or tuple or urllib3 Timeout object 280s :param verify: (optional) Either a boolean, in which case it controls whether 280s we verify the server's TLS certificate, or a string, in which case it 280s must be a path to a CA bundle to use 280s :param cert: (optional) Any user-provided SSL certificate to be trusted. 280s :param proxies: (optional) The proxies dictionary to apply to the request. 280s :rtype: requests.Response 280s """ 280s 280s try: 280s conn = self.get_connection(request.url, proxies) 280s except LocationValueError as e: 280s raise InvalidURL(e, request=request) 280s 280s self.cert_verify(conn, request.url, verify, cert) 280s url = self.request_url(request, proxies) 280s self.add_headers( 280s request, 280s stream=stream, 280s timeout=timeout, 280s verify=verify, 280s cert=cert, 280s proxies=proxies, 280s ) 280s 280s chunked = not (request.body is None or "Content-Length" in request.headers) 280s 280s if isinstance(timeout, tuple): 280s try: 280s connect, read = timeout 280s timeout = TimeoutSauce(connect=connect, read=read) 280s except ValueError: 280s raise ValueError( 280s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 280s f"or a single float to set both timeouts to the same value." 
280s ) 280s elif isinstance(timeout, TimeoutSauce): 280s pass 280s else: 280s timeout = TimeoutSauce(connect=timeout, read=timeout) 280s 280s try: 280s resp = conn.urlopen( 280s method=request.method, 280s url=url, 280s body=request.body, 280s headers=request.headers, 280s redirect=False, 280s assert_same_host=False, 280s preload_content=False, 280s decode_content=False, 280s retries=self.max_retries, 280s timeout=timeout, 280s chunked=chunked, 280s ) 280s 280s except (ProtocolError, OSError) as err: 280s raise ConnectionError(err, request=request) 280s 280s except MaxRetryError as e: 280s if isinstance(e.reason, ConnectTimeoutError): 280s # TODO: Remove this in 3.0.0: see #2811 280s if not isinstance(e.reason, NewConnectionError): 280s raise ConnectTimeout(e, request=request) 280s 280s if isinstance(e.reason, ResponseError): 280s raise RetryError(e, request=request) 280s 280s if isinstance(e.reason, _ProxyError): 280s raise ProxyError(e, request=request) 280s 280s if isinstance(e.reason, _SSLError): 280s # This branch is for urllib3 v1.22 and later. 
280s raise SSLError(e, request=request) 280s 280s > raise ConnectionError(e, request=request) 280s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 280s 280s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 280s 280s The above exception was the direct cause of the following exception: 280s 280s cls = 280s 280s @classmethod 280s def setup_class(cls): 280s cls.tmp_dir = TemporaryDirectory() 280s def tmp(*parts): 280s path = os.path.join(cls.tmp_dir.name, *parts) 280s try: 280s os.makedirs(path) 280s except OSError as e: 280s if e.errno != errno.EEXIST: 280s raise 280s return path 280s 280s cls.home_dir = tmp('home') 280s data_dir = cls.data_dir = tmp('data') 280s config_dir = cls.config_dir = tmp('config') 280s runtime_dir = cls.runtime_dir = tmp('runtime') 280s cls.notebook_dir = tmp('notebooks') 280s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 280s cls.env_patch.start() 280s # Patch systemwide & user-wide data & config directories, to isolate 280s # the tests from oddities of the local setup. But leave Python env 280s # locations alone, so data files for e.g. nbconvert are accessible. 280s # If this isolation isn't sufficient, you may need to run the tests in 280s # a virtualenv or conda env. 
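The `setup_class` method above builds its isolated directory tree with a small `tmp(*parts)` helper that tolerates already-existing directories via an `errno.EEXIST` check. A self-contained sketch of that helper, assuming a throwaway `tempfile` root (on Python 3 the errno dance could also be written as `os.makedirs(path, exist_ok=True)`):

```python
import errno
import os
import tempfile

# Throwaway root standing in for cls.tmp_dir in the test harness above.
_root = tempfile.TemporaryDirectory()

def tmp(*parts):
    """Join *parts* under the temporary root and ensure the directory exists."""
    path = os.path.join(_root.name, *parts)
    try:
        os.makedirs(path)
    except OSError as e:
        # Only "already exists" is tolerated; any other OSError propagates.
        if e.errno != errno.EEXIST:
            raise
    return path
```

Because `EEXIST` is swallowed, repeated calls with the same parts are idempotent, which is what lets the harness call it for `home`, `data`, `config`, and so on without ordering concerns.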
280s cls.path_patch = patch.multiple( 280s jupyter_core.paths, 280s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 280s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 280s ) 280s cls.path_patch.start() 280s 280s config = cls.config or Config() 280s config.NotebookNotary.db_file = ':memory:' 280s 280s cls.token = hexlify(os.urandom(4)).decode('ascii') 280s 280s started = Event() 280s def start_thread(): 280s try: 280s bind_args = cls.get_bind_args() 280s app = cls.notebook = NotebookApp( 280s port_retries=0, 280s open_browser=False, 280s config_dir=cls.config_dir, 280s data_dir=cls.data_dir, 280s runtime_dir=cls.runtime_dir, 280s notebook_dir=cls.notebook_dir, 280s base_url=cls.url_prefix, 280s config=config, 280s allow_root=True, 280s token=cls.token, 280s **bind_args 280s ) 280s if "asyncio" in sys.modules: 280s app._init_asyncio_patch() 280s import asyncio 280s 280s asyncio.set_event_loop(asyncio.new_event_loop()) 280s # Patch the current loop in order to match production 280s # behavior 280s import nest_asyncio 280s 280s nest_asyncio.apply() 280s # don't register signal handler during tests 280s app.init_signal = lambda : None 280s # clear log handlers and propagate to root for nose to capture it 280s # needs to be redone after initialize, which reconfigures logging 280s app.log.propagate = True 280s app.log.handlers = [] 280s app.initialize(argv=cls.get_argv()) 280s app.log.propagate = True 280s app.log.handlers = [] 280s loop = IOLoop.current() 280s loop.add_callback(started.set) 280s app.start() 280s finally: 280s # set the event, so failure to start doesn't cause a hang 280s started.set() 280s app.session_manager.close() 280s cls.notebook_thread = Thread(target=start_thread) 280s cls.notebook_thread.daemon = True 280s cls.notebook_thread.start() 280s started.wait() 280s > cls.wait_until_alive() 280s 280s notebook/tests/launchnotebook.py:198: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s cls = 280s 280s 
@classmethod 280s def wait_until_alive(cls): 280s """Wait for the server to be alive""" 280s url = cls.base_url() + 'api/contents' 280s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 280s try: 280s cls.fetch_url(url) 280s except ModuleNotFoundError as error: 280s # Errors that should be immediately thrown back to caller 280s raise error 280s except Exception as e: 280s if not cls.notebook_thread.is_alive(): 280s > raise RuntimeError("The notebook server failed to start") from e 280s E RuntimeError: The notebook server failed to start 280s 280s notebook/tests/launchnotebook.py:59: RuntimeError 280s __________________ ERROR at setup of APITest.test_checkpoints __________________ 280s 280s self = 280s 280s def _new_conn(self) -> socket.socket: 280s """Establish a socket connection and set nodelay settings on it. 280s 280s :return: New socket connection. 280s """ 280s try: 280s > sock = connection.create_connection( 280s (self._dns_host, self.port), 280s self.timeout, 280s source_address=self.source_address, 280s socket_options=self.socket_options, 280s ) 280s 280s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 280s raise err 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s address = ('localhost', 12341), timeout = None, source_address = None 280s socket_options = [(6, 1, 1)] 280s 280s def create_connection( 280s address: tuple[str, int], 280s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 280s source_address: tuple[str, int] | None = None, 280s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 280s ) -> socket.socket: 280s """Connect to *address* and return the socket object. 280s 280s Convenience function. Connect to *address* (a 2-tuple ``(host, 280s port)``) and return the socket object. 
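The `wait_until_alive` loop shown above polls the server URL, re-raises immediately-fatal errors, and converts any failure into `RuntimeError("The notebook server failed to start")` once the server thread is dead. A generic sketch of that pattern, with illustrative names (`probe`, `is_alive`) rather than the test suite's actual API:

```python
import time

def wait_until_alive(probe, is_alive, max_waittime=30.0, poll_interval=0.01):
    """Call probe() until it succeeds, or fail once the worker is dead."""
    last_error = None
    for _ in range(int(max_waittime / poll_interval)):
        try:
            return probe()
        except Exception as e:
            last_error = e
            if not is_alive():
                # Mirrors the log: the server thread died, so polling is futile.
                raise RuntimeError("The notebook server failed to start") from e
            time.sleep(poll_interval)
    raise RuntimeError("The notebook server never became ready") from last_error
```

Chaining with `from e` is what produces the "The above exception was the direct cause of the following exception" sections seen throughout this log.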
Passing the optional 280s *timeout* parameter will set the timeout on the socket instance 280s before attempting to connect. If no *timeout* is supplied, the 280s global default timeout setting returned by :func:`socket.getdefaulttimeout` 280s is used. If *source_address* is set it must be a tuple of (host, port) 280s for the socket to bind as a source address before making the connection. 280s An host of '' or port 0 tells the OS to use the default. 280s """ 280s 280s host, port = address 280s if host.startswith("["): 280s host = host.strip("[]") 280s err = None 280s 280s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 280s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 280s # The original create_connection function always returns all records. 280s family = allowed_gai_family() 280s 280s try: 280s host.encode("idna") 280s except UnicodeError: 280s raise LocationParseError(f"'{host}', label empty or too long") from None 280s 280s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 280s af, socktype, proto, canonname, sa = res 280s sock = None 280s try: 280s sock = socket.socket(af, socktype, proto) 280s 280s # If provided, set socket level options before connecting. 
280s _set_socket_options(sock, socket_options) 280s 280s if timeout is not _DEFAULT_TIMEOUT: 280s sock.settimeout(timeout) 280s if source_address: 280s sock.bind(source_address) 280s > sock.connect(sa) 280s E ConnectionRefusedError: [Errno 111] Connection refused 280s 280s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 280s 280s The above exception was the direct cause of the following exception: 280s 280s self = 280s method = 'GET', url = '/a%40b/api/contents', body = None 280s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 280s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 280s redirect = False, assert_same_host = False 280s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 280s release_conn = False, chunked = False, body_pos = None, preload_content = False 280s decode_content = False, response_kw = {} 280s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 280s destination_scheme = None, conn = None, release_this_conn = True 280s http_tunnel_required = False, err = None, clean_exit = False 280s 280s def urlopen( # type: ignore[override] 280s self, 280s method: str, 280s url: str, 280s body: _TYPE_BODY | None = None, 280s headers: typing.Mapping[str, str] | None = None, 280s retries: Retry | bool | int | None = None, 280s redirect: bool = True, 280s assert_same_host: bool = True, 280s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 280s pool_timeout: int | None = None, 280s release_conn: bool | None = None, 280s chunked: bool = False, 280s body_pos: _TYPE_BODY_POSITION | None = None, 280s preload_content: bool = True, 280s decode_content: bool = True, 280s **response_kw: typing.Any, 280s ) -> BaseHTTPResponse: 280s """ 280s Get a connection from the pool and perform an HTTP request. 
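The `create_connection` frames above show urllib3's core connect loop: resolve the host with `getaddrinfo`, try each returned address in turn, and re-raise the last error (here `ConnectionRefusedError`, Errno 111) if none connects. A condensed, stdlib-only sketch of that loop, omitting the socket-options and source-address handling of the real function:

```python
import socket

def create_connection(address, timeout=None):
    """Try each getaddrinfo result for *address*; return the first connected socket."""
    host, port = address
    err = None
    for af, socktype, proto, _canon, sa in socket.getaddrinfo(
        host, port, socket.AF_UNSPEC, socket.SOCK_STREAM
    ):
        sock = None
        try:
            sock = socket.socket(af, socktype, proto)
            if timeout is not None:
                sock.settimeout(timeout)
            sock.connect(sa)
            return sock
        except OSError as e:
            err = e  # remember the failure, move on to the next record
            if sock is not None:
                sock.close()
    if err is not None:
        raise err  # e.g. ConnectionRefusedError, as seen in this log
    raise OSError("getaddrinfo returned an empty list")
```

When every resolved address refuses the connection, the final `raise err` is exactly the point where the `[Errno 111] Connection refused` in these tracebacks originates.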
This is the 280s lowest level call for making a request, so you'll need to specify all 280s the raw details. 280s 280s .. note:: 280s 280s More commonly, it's appropriate to use a convenience method 280s such as :meth:`request`. 280s 280s .. note:: 280s 280s `release_conn` will only behave as expected if 280s `preload_content=False` because we want to make 280s `preload_content=False` the default behaviour someday soon without 280s breaking backwards compatibility. 280s 280s :param method: 280s HTTP request method (such as GET, POST, PUT, etc.) 280s 280s :param url: 280s The URL to perform the request on. 280s 280s :param body: 280s Data to send in the request body, either :class:`str`, :class:`bytes`, 280s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 280s 280s :param headers: 280s Dictionary of custom headers to send, such as User-Agent, 280s If-None-Match, etc. If None, pool headers are used. If provided, 280s these headers completely replace any pool-specific headers. 280s 280s :param retries: 280s Configure the number of retries to allow before raising a 280s :class:`~urllib3.exceptions.MaxRetryError` exception. 280s 280s Pass ``None`` to retry until you receive a response. Pass a 280s :class:`~urllib3.util.retry.Retry` object for fine-grained control 280s over different types of retries. 280s Pass an integer number to retry connection errors that many times, 280s but no other types of errors. Pass zero to never retry. 280s 280s If ``False``, then retries are disabled and any exception is raised 280s immediately. Also, instead of raising a MaxRetryError on redirects, 280s the redirect response will be returned. 280s 280s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 280s 280s :param redirect: 280s If True, automatically handle redirects (status codes 301, 302, 280s 303, 307, 308). Each redirect counts as a retry. Disabling retries 280s will disable redirect, too. 
280s 280s :param assert_same_host: 280s If ``True``, will make sure that the host of the pool requests is 280s consistent else will raise HostChangedError. When ``False``, you can 280s use the pool on an HTTP proxy and request foreign hosts. 280s 280s :param timeout: 280s If specified, overrides the default timeout for this one 280s request. It may be a float (in seconds) or an instance of 280s :class:`urllib3.util.Timeout`. 280s 280s :param pool_timeout: 280s If set and the pool is set to block=True, then this method will 280s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 280s connection is available within the time period. 280s 280s :param bool preload_content: 280s If True, the response's body will be preloaded into memory. 280s 280s :param bool decode_content: 280s If True, will attempt to decode the body based on the 280s 'content-encoding' header. 280s 280s :param release_conn: 280s If False, then the urlopen call will not release the connection 280s back into the pool once a response is received (but will release if 280s you read the entire contents of the response such as when 280s `preload_content=True`). This is useful if you're not preloading 280s the response's content immediately. You will need to call 280s ``r.release_conn()`` on the response ``r`` to return the connection 280s back into the pool. If None, it takes the value of ``preload_content`` 280s which defaults to ``True``. 280s 280s :param bool chunked: 280s If True, urllib3 will send the body using chunked transfer 280s encoding. Otherwise, urllib3 will send the body using the standard 280s content-length form. Defaults to False. 280s 280s :param int body_pos: 280s Position to seek to in file-like body in the event of a retry or 280s redirect. Typically this won't need to be set because urllib3 will 280s auto-populate the value when needed. 
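The `retries` parameter described in the docstring above accepts a `Retry` object, an int, `False`, or `None`, and `urlopen` normalizes non-`Retry` values via `Retry.from_int`. A dict-based sketch of that normalization under the docstring's stated semantics (an int bounds connection-error retries, `False` disables retrying and hands redirect responses back, `None` falls back to the pool default); the dict shape is illustrative only, since urllib3 builds a real `Retry` object:

```python
def retries_from_int(retries, default=3):
    """Normalize a retries argument the way Retry.from_int is described above."""
    if retries is None:
        # Fall back to the pool's configured default budget.
        return {"total": default, "raise_on_redirect": True}
    if retries is False:
        # Disabled: raise errors immediately, return redirects to the caller.
        return {"total": 0, "raise_on_redirect": False}
    # An integer: retry connection errors that many times (0 = never retry).
    return {"total": int(retries), "raise_on_redirect": True}
```

The `Retry(total=0, ...)` visible in this log's locals is the requests default (`max_retries=0`), which is why a single refused connection is enough to exhaust the budget.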
280s """ 280s parsed_url = parse_url(url) 280s destination_scheme = parsed_url.scheme 280s 280s if headers is None: 280s headers = self.headers 280s 280s if not isinstance(retries, Retry): 280s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 280s 280s if release_conn is None: 280s release_conn = preload_content 280s 280s # Check host 280s if assert_same_host and not self.is_same_host(url): 280s raise HostChangedError(self, url, retries) 280s 280s # Ensure that the URL we're connecting to is properly encoded 280s if url.startswith("/"): 280s url = to_str(_encode_target(url)) 280s else: 280s url = to_str(parsed_url.url) 280s 280s conn = None 280s 280s # Track whether `conn` needs to be released before 280s # returning/raising/recursing. Update this variable if necessary, and 280s # leave `release_conn` constant throughout the function. That way, if 280s # the function recurses, the original value of `release_conn` will be 280s # passed down into the recursive call, and its value will be respected. 280s # 280s # See issue #651 [1] for details. 280s # 280s # [1] 280s release_this_conn = release_conn 280s 280s http_tunnel_required = connection_requires_http_tunnel( 280s self.proxy, self.proxy_config, destination_scheme 280s ) 280s 280s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 280s # have to copy the headers dict so we can safely change it without those 280s # changes being reflected in anyone else's copy. 280s if not http_tunnel_required: 280s headers = headers.copy() # type: ignore[attr-defined] 280s headers.update(self.proxy_headers) # type: ignore[union-attr] 280s 280s # Must keep the exception bound to a separate variable or else Python 3 280s # complains about UnboundLocalError. 280s err = None 280s 280s # Keep track of whether we cleanly exited the except block. This 280s # ensures we do proper cleanup in finally. 280s clean_exit = False 280s 280s # Rewind body position, if needed. 
Record current position 280s # for future rewinds in the event of a redirect/retry. 280s body_pos = set_file_position(body, body_pos) 280s 280s try: 280s # Request a connection from the queue. 280s timeout_obj = self._get_timeout(timeout) 280s conn = self._get_conn(timeout=pool_timeout) 280s 280s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 280s 280s # Is this a closed/new connection that requires CONNECT tunnelling? 280s if self.proxy is not None and http_tunnel_required and conn.is_closed: 280s try: 280s self._prepare_proxy(conn) 280s except (BaseSSLError, OSError, SocketTimeout) as e: 280s self._raise_timeout( 280s err=e, url=self.proxy.url, timeout_value=conn.timeout 280s ) 280s raise 280s 280s # If we're going to release the connection in ``finally:``, then 280s # the response doesn't need to know about the connection. Otherwise 280s # it will also try to release it and we'll have a double-release 280s # mess. 280s response_conn = conn if not release_conn else None 280s 280s # Make the request on the HTTPConnection object 280s > response = self._make_request( 280s conn, 280s method, 280s url, 280s timeout=timeout_obj, 280s body=body, 280s headers=headers, 280s chunked=chunked, 280s retries=retries, 280s response_conn=response_conn, 280s preload_content=preload_content, 280s decode_content=decode_content, 280s **response_kw, 280s ) 280s 280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 280s conn.request( 280s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 280s self.endheaders() 280s /usr/lib/python3.12/http/client.py:1331: in endheaders 280s self._send_output(message_body, encode_chunked=encode_chunked) 280s /usr/lib/python3.12/http/client.py:1091: in _send_output 280s self.send(msg) 280s /usr/lib/python3.12/http/client.py:1035: in 
send 280s self.connect() 280s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 280s self.sock = self._new_conn() 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s self = 280s 280s def _new_conn(self) -> socket.socket: 280s """Establish a socket connection and set nodelay settings on it. 280s 280s :return: New socket connection. 280s """ 280s try: 280s sock = connection.create_connection( 280s (self._dns_host, self.port), 280s self.timeout, 280s source_address=self.source_address, 280s socket_options=self.socket_options, 280s ) 280s except socket.gaierror as e: 280s raise NameResolutionError(self.host, self, e) from e 280s except SocketTimeout as e: 280s raise ConnectTimeoutError( 280s self, 280s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 280s ) from e 280s 280s except OSError as e: 280s > raise NewConnectionError( 280s self, f"Failed to establish a new connection: {e}" 280s ) from e 280s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 280s 280s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 280s 280s The above exception was the direct cause of the following exception: 280s 280s self = 280s request = , stream = False 280s timeout = Timeout(connect=None, read=None, total=None), verify = True 280s cert = None, proxies = OrderedDict() 280s 280s def send( 280s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 280s ): 280s """Sends PreparedRequest object. Returns Response object. 280s 280s :param request: The :class:`PreparedRequest ` being sent. 280s :param stream: (optional) Whether to stream the request content. 280s :param timeout: (optional) How long to wait for the server to send 280s data before giving up, as a float, or a :ref:`(connect timeout, 280s read timeout) ` tuple. 
280s :type timeout: float or tuple or urllib3 Timeout object 280s :param verify: (optional) Either a boolean, in which case it controls whether 280s we verify the server's TLS certificate, or a string, in which case it 280s must be a path to a CA bundle to use 280s :param cert: (optional) Any user-provided SSL certificate to be trusted. 280s :param proxies: (optional) The proxies dictionary to apply to the request. 280s :rtype: requests.Response 280s """ 280s 280s try: 280s conn = self.get_connection(request.url, proxies) 280s except LocationValueError as e: 280s raise InvalidURL(e, request=request) 280s 280s self.cert_verify(conn, request.url, verify, cert) 280s url = self.request_url(request, proxies) 280s self.add_headers( 280s request, 280s stream=stream, 280s timeout=timeout, 280s verify=verify, 280s cert=cert, 280s proxies=proxies, 280s ) 280s 280s chunked = not (request.body is None or "Content-Length" in request.headers) 280s 280s if isinstance(timeout, tuple): 280s try: 280s connect, read = timeout 280s timeout = TimeoutSauce(connect=connect, read=read) 280s except ValueError: 280s raise ValueError( 280s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 280s f"or a single float to set both timeouts to the same value." 
280s ) 280s elif isinstance(timeout, TimeoutSauce): 280s pass 280s else: 280s timeout = TimeoutSauce(connect=timeout, read=timeout) 280s 280s try: 280s > resp = conn.urlopen( 280s method=request.method, 280s url=url, 280s body=request.body, 280s headers=request.headers, 280s redirect=False, 280s assert_same_host=False, 280s preload_content=False, 280s decode_content=False, 280s retries=self.max_retries, 280s timeout=timeout, 280s chunked=chunked, 280s ) 280s 280s /usr/lib/python3/dist-packages/requests/adapters.py:486: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 280s retries = retries.increment( 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 280s method = 'GET', url = '/a%40b/api/contents', response = None 280s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 280s _pool = 280s _stacktrace = 280s 280s def increment( 280s self, 280s method: str | None = None, 280s url: str | None = None, 280s response: BaseHTTPResponse | None = None, 280s error: Exception | None = None, 280s _pool: ConnectionPool | None = None, 280s _stacktrace: TracebackType | None = None, 280s ) -> Retry: 280s """Return a new Retry object with incremented retry counters. 280s 280s :param response: A response object, or None, if the server did not 280s return a response. 280s :type response: :class:`~urllib3.response.BaseHTTPResponse` 280s :param Exception error: An error encountered during the request, or 280s None if the response was received successfully. 280s 280s :return: A new ``Retry`` object. 280s """ 280s if self.total is False and error: 280s # Disabled, indicate to re-raise the error. 
280s raise reraise(type(error), error, _stacktrace) 280s 280s total = self.total 280s if total is not None: 280s total -= 1 280s 280s connect = self.connect 280s read = self.read 280s redirect = self.redirect 280s status_count = self.status 280s other = self.other 280s cause = "unknown" 280s status = None 280s redirect_location = None 280s 280s if error and self._is_connection_error(error): 280s # Connect retry? 280s if connect is False: 280s raise reraise(type(error), error, _stacktrace) 280s elif connect is not None: 280s connect -= 1 280s 280s elif error and self._is_read_error(error): 280s # Read retry? 280s if read is False or method is None or not self._is_method_retryable(method): 280s raise reraise(type(error), error, _stacktrace) 280s elif read is not None: 280s read -= 1 280s 280s elif error: 280s # Other retry? 280s if other is not None: 280s other -= 1 280s 280s elif response and response.get_redirect_location(): 280s # Redirect retry? 280s if redirect is not None: 280s redirect -= 1 280s cause = "too many redirects" 280s response_redirect_location = response.get_redirect_location() 280s if response_redirect_location: 280s redirect_location = response_redirect_location 280s status = response.status 280s 280s else: 280s # Incrementing because of a server error like a 500 in 280s # status_forcelist and the given method is in the allowed_methods 280s cause = ResponseError.GENERIC_ERROR 280s if response and response.status: 280s if status_count is not None: 280s status_count -= 1 280s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 280s status = response.status 280s 280s history = self.history + ( 280s RequestHistory(method, url, error, status, redirect_location), 280s ) 280s 280s new_retry = self.new( 280s total=total, 280s connect=connect, 280s read=read, 280s redirect=redirect, 280s status=status_count, 280s other=other, 280s history=history, 280s ) 280s 280s if new_retry.is_exhausted(): 280s reason = error or 
ResponseError(cause) 280s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 280s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 280s 280s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 280s 280s During handling of the above exception, another exception occurred: 280s 280s cls = 280s 280s @classmethod 280s def wait_until_alive(cls): 280s """Wait for the server to be alive""" 280s url = cls.base_url() + 'api/contents' 280s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 280s try: 280s > cls.fetch_url(url) 280s 280s notebook/tests/launchnotebook.py:53: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s notebook/tests/launchnotebook.py:82: in fetch_url 280s return requests.get(url) 280s /usr/lib/python3/dist-packages/requests/api.py:73: in get 280s return request("get", url, params=params, **kwargs) 280s /usr/lib/python3/dist-packages/requests/api.py:59: in request 280s return session.request(method=method, url=url, **kwargs) 280s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 280s resp = self.send(prep, **send_kwargs) 280s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 280s r = adapter.send(request, **kwargs) 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s self = 280s request = , stream = False 280s timeout = Timeout(connect=None, read=None, total=None), verify = True 280s cert = None, proxies = OrderedDict() 280s 280s def send( 280s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 280s ): 280s """Sends PreparedRequest object. Returns Response object. 280s 280s :param request: The :class:`PreparedRequest ` being sent. 
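`Retry.increment` above returns a new `Retry` with decremented counters and raises `MaxRetryError` once the budget is exhausted, which is how `total=0` plus one refused connection produces the "Max retries exceeded" error in this log. A minimal model of that mechanism (the `MaxRetryError` class here is a local stand-in, not urllib3's):

```python
class MaxRetryError(Exception):
    """Stand-in for urllib3.exceptions.MaxRetryError."""

class Retry:
    def __init__(self, total=3, history=()):
        self.total = total
        self.history = tuple(history)

    def is_exhausted(self):
        # total=None means unlimited; otherwise exhausted once it goes negative.
        return self.total is not None and self.total < 0

    def increment(self, error):
        total = self.total
        if total is not None:
            total -= 1
        new_retry = Retry(total, self.history + (error,))
        if new_retry.is_exhausted():
            # Chain the underlying error, as the real code does with `from reason`.
            raise MaxRetryError(error) from error
        return new_retry
```

With `total=0`, the very first `increment` drops the counter to -1 and raises, matching the single-attempt failure recorded in these tracebacks.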
280s :param stream: (optional) Whether to stream the request content. 280s :param timeout: (optional) How long to wait for the server to send 280s data before giving up, as a float, or a :ref:`(connect timeout, 280s read timeout) ` tuple. 280s :type timeout: float or tuple or urllib3 Timeout object 280s :param verify: (optional) Either a boolean, in which case it controls whether 280s we verify the server's TLS certificate, or a string, in which case it 280s must be a path to a CA bundle to use 280s :param cert: (optional) Any user-provided SSL certificate to be trusted. 280s :param proxies: (optional) The proxies dictionary to apply to the request. 280s :rtype: requests.Response 280s """ 280s 280s try: 280s conn = self.get_connection(request.url, proxies) 280s except LocationValueError as e: 280s raise InvalidURL(e, request=request) 280s 280s self.cert_verify(conn, request.url, verify, cert) 280s url = self.request_url(request, proxies) 280s self.add_headers( 280s request, 280s stream=stream, 280s timeout=timeout, 280s verify=verify, 280s cert=cert, 280s proxies=proxies, 280s ) 280s 280s chunked = not (request.body is None or "Content-Length" in request.headers) 280s 280s if isinstance(timeout, tuple): 280s try: 280s connect, read = timeout 280s timeout = TimeoutSauce(connect=connect, read=read) 280s except ValueError: 280s raise ValueError( 280s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 280s f"or a single float to set both timeouts to the same value." 
280s ) 280s elif isinstance(timeout, TimeoutSauce): 280s pass 280s else: 280s timeout = TimeoutSauce(connect=timeout, read=timeout) 280s 280s try: 280s resp = conn.urlopen( 280s method=request.method, 280s url=url, 280s body=request.body, 280s headers=request.headers, 280s redirect=False, 280s assert_same_host=False, 280s preload_content=False, 280s decode_content=False, 280s retries=self.max_retries, 280s timeout=timeout, 280s chunked=chunked, 280s ) 280s 280s except (ProtocolError, OSError) as err: 280s raise ConnectionError(err, request=request) 280s 280s except MaxRetryError as e: 280s if isinstance(e.reason, ConnectTimeoutError): 280s # TODO: Remove this in 3.0.0: see #2811 280s if not isinstance(e.reason, NewConnectionError): 280s raise ConnectTimeout(e, request=request) 280s 280s if isinstance(e.reason, ResponseError): 280s raise RetryError(e, request=request) 280s 280s if isinstance(e.reason, _ProxyError): 280s raise ProxyError(e, request=request) 280s 280s if isinstance(e.reason, _SSLError): 280s # This branch is for urllib3 v1.22 and later. 
280s raise SSLError(e, request=request) 280s 280s > raise ConnectionError(e, request=request) 280s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 280s 280s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 280s 280s The above exception was the direct cause of the following exception: 280s 280s cls = 280s 280s @classmethod 280s def setup_class(cls): 280s cls.tmp_dir = TemporaryDirectory() 280s def tmp(*parts): 280s path = os.path.join(cls.tmp_dir.name, *parts) 280s try: 280s os.makedirs(path) 280s except OSError as e: 280s if e.errno != errno.EEXIST: 280s raise 280s return path 280s 280s cls.home_dir = tmp('home') 280s data_dir = cls.data_dir = tmp('data') 280s config_dir = cls.config_dir = tmp('config') 280s runtime_dir = cls.runtime_dir = tmp('runtime') 280s cls.notebook_dir = tmp('notebooks') 280s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 280s cls.env_patch.start() 280s # Patch systemwide & user-wide data & config directories, to isolate 280s # the tests from oddities of the local setup. But leave Python env 280s # locations alone, so data files for e.g. nbconvert are accessible. 280s # If this isolation isn't sufficient, you may need to run the tests in 280s # a virtualenv or conda env. 
280s cls.path_patch = patch.multiple( 280s jupyter_core.paths, 280s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 280s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 280s ) 280s cls.path_patch.start() 280s 280s config = cls.config or Config() 280s config.NotebookNotary.db_file = ':memory:' 280s 280s cls.token = hexlify(os.urandom(4)).decode('ascii') 280s 280s started = Event() 280s def start_thread(): 280s try: 280s bind_args = cls.get_bind_args() 280s app = cls.notebook = NotebookApp( 280s port_retries=0, 280s open_browser=False, 280s config_dir=cls.config_dir, 280s data_dir=cls.data_dir, 280s runtime_dir=cls.runtime_dir, 280s notebook_dir=cls.notebook_dir, 280s base_url=cls.url_prefix, 280s config=config, 280s allow_root=True, 280s token=cls.token, 280s **bind_args 280s ) 280s if "asyncio" in sys.modules: 280s app._init_asyncio_patch() 280s import asyncio 280s 280s asyncio.set_event_loop(asyncio.new_event_loop()) 280s # Patch the current loop in order to match production 280s # behavior 280s import nest_asyncio 280s 280s nest_asyncio.apply() 280s # don't register signal handler during tests 280s app.init_signal = lambda : None 280s # clear log handlers and propagate to root for nose to capture it 280s # needs to be redone after initialize, which reconfigures logging 280s app.log.propagate = True 280s app.log.handlers = [] 280s app.initialize(argv=cls.get_argv()) 280s app.log.propagate = True 280s app.log.handlers = [] 280s loop = IOLoop.current() 280s loop.add_callback(started.set) 280s app.start() 280s finally: 280s # set the event, so failure to start doesn't cause a hang 280s started.set() 280s app.session_manager.close() 280s cls.notebook_thread = Thread(target=start_thread) 280s cls.notebook_thread.daemon = True 280s cls.notebook_thread.start() 280s started.wait() 280s > cls.wait_until_alive() 280s 280s notebook/tests/launchnotebook.py:198: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s cls = 280s 280s 
@classmethod 280s def wait_until_alive(cls): 280s """Wait for the server to be alive""" 280s url = cls.base_url() + 'api/contents' 280s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 280s try: 280s cls.fetch_url(url) 280s except ModuleNotFoundError as error: 280s # Errors that should be immediately thrown back to caller 280s raise error 280s except Exception as e: 280s if not cls.notebook_thread.is_alive(): 280s > raise RuntimeError("The notebook server failed to start") from e 280s E RuntimeError: The notebook server failed to start 280s 280s notebook/tests/launchnotebook.py:59: RuntimeError 280s ___________ ERROR at setup of APITest.test_checkpoints_separate_root ___________ 280s 280s self = 280s 280s def _new_conn(self) -> socket.socket: 280s """Establish a socket connection and set nodelay settings on it. 280s 280s :return: New socket connection. 280s """ 280s try: 280s > sock = connection.create_connection( 280s (self._dns_host, self.port), 280s self.timeout, 280s source_address=self.source_address, 280s socket_options=self.socket_options, 280s ) 280s 280s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 280s raise err 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s address = ('localhost', 12341), timeout = None, source_address = None 280s socket_options = [(6, 1, 1)] 280s 280s def create_connection( 280s address: tuple[str, int], 280s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 280s source_address: tuple[str, int] | None = None, 280s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 280s ) -> socket.socket: 280s """Connect to *address* and return the socket object. 280s 280s Convenience function. Connect to *address* (a 2-tuple ``(host, 280s port)``) and return the socket object. 
Passing the optional 280s *timeout* parameter will set the timeout on the socket instance 280s before attempting to connect. If no *timeout* is supplied, the 280s global default timeout setting returned by :func:`socket.getdefaulttimeout` 280s is used. If *source_address* is set it must be a tuple of (host, port) 280s for the socket to bind as a source address before making the connection. 280s An host of '' or port 0 tells the OS to use the default. 280s """ 280s 280s host, port = address 280s if host.startswith("["): 280s host = host.strip("[]") 280s err = None 280s 280s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 280s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 280s # The original create_connection function always returns all records. 280s family = allowed_gai_family() 280s 280s try: 280s host.encode("idna") 280s except UnicodeError: 280s raise LocationParseError(f"'{host}', label empty or too long") from None 280s 280s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 280s af, socktype, proto, canonname, sa = res 280s sock = None 280s try: 280s sock = socket.socket(af, socktype, proto) 280s 280s # If provided, set socket level options before connecting. 
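The host-normalization steps quoted above (stripping the brackets off an IPv6 literal, then validating the name through the stdlib IDNA codec) can be sketched standalone. `normalize_host` is an illustrative name, not urllib3 API; it mirrors the two checks shown in `create_connection`:

```python
def normalize_host(host: str) -> str:
    # Strip the brackets an IPv6 literal is wrapped in, as in the code above.
    if host.startswith("["):
        host = host.strip("[]")
    # Validate via the stdlib IDNA codec; each dot-separated label must be
    # 1..63 characters, otherwise encode("idna") raises UnicodeError.
    try:
        host.encode("idna")
    except UnicodeError:
        raise ValueError(f"'{host}', label empty or too long") from None
    return host
```

This is why an over-long hostname label surfaces as `LocationParseError` in urllib3 before any socket is opened.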
280s _set_socket_options(sock, socket_options) 280s 280s if timeout is not _DEFAULT_TIMEOUT: 280s sock.settimeout(timeout) 280s if source_address: 280s sock.bind(source_address) 280s > sock.connect(sa) 280s E ConnectionRefusedError: [Errno 111] Connection refused 280s 280s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 280s 280s The above exception was the direct cause of the following exception: 280s 280s self = 280s method = 'GET', url = '/a%40b/api/contents', body = None 280s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 280s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 280s redirect = False, assert_same_host = False 280s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 280s release_conn = False, chunked = False, body_pos = None, preload_content = False 280s decode_content = False, response_kw = {} 280s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 280s destination_scheme = None, conn = None, release_this_conn = True 280s http_tunnel_required = False, err = None, clean_exit = False 280s 280s def urlopen( # type: ignore[override] 280s self, 280s method: str, 280s url: str, 280s body: _TYPE_BODY | None = None, 280s headers: typing.Mapping[str, str] | None = None, 280s retries: Retry | bool | int | None = None, 280s redirect: bool = True, 280s assert_same_host: bool = True, 280s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 280s pool_timeout: int | None = None, 280s release_conn: bool | None = None, 280s chunked: bool = False, 280s body_pos: _TYPE_BODY_POSITION | None = None, 280s preload_content: bool = True, 280s decode_content: bool = True, 280s **response_kw: typing.Any, 280s ) -> BaseHTTPResponse: 280s """ 280s Get a connection from the pool and perform an HTTP request. 
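The `[Errno 111] Connection refused` in the traceback above simply means nothing was listening on the target port. A minimal local reproduction (Linux loopback; we bind an ephemeral port, close the listener, then connect to the now-dead port):

```python
import socket

# Reserve an ephemeral port, then close the listener immediately so that
# nothing is accepting connections on it.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
free_port = listener.getsockname()[1]
listener.close()

try:
    # Same stdlib call urllib3's create_connection wraps.
    socket.create_connection(("127.0.0.1", free_port), timeout=2)
    refused = False
except ConnectionRefusedError:
    refused = True
```

In the test run above the refused port is the notebook server's, because the server thread died before binding it.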
This is the 280s lowest level call for making a request, so you'll need to specify all 280s the raw details. 280s 280s .. note:: 280s 280s More commonly, it's appropriate to use a convenience method 280s such as :meth:`request`. 280s 280s .. note:: 280s 280s `release_conn` will only behave as expected if 280s `preload_content=False` because we want to make 280s `preload_content=False` the default behaviour someday soon without 280s breaking backwards compatibility. 280s 280s :param method: 280s HTTP request method (such as GET, POST, PUT, etc.) 280s 280s :param url: 280s The URL to perform the request on. 280s 280s :param body: 280s Data to send in the request body, either :class:`str`, :class:`bytes`, 280s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 280s 280s :param headers: 280s Dictionary of custom headers to send, such as User-Agent, 280s If-None-Match, etc. If None, pool headers are used. If provided, 280s these headers completely replace any pool-specific headers. 280s 280s :param retries: 280s Configure the number of retries to allow before raising a 280s :class:`~urllib3.exceptions.MaxRetryError` exception. 280s 280s Pass ``None`` to retry until you receive a response. Pass a 280s :class:`~urllib3.util.retry.Retry` object for fine-grained control 280s over different types of retries. 280s Pass an integer number to retry connection errors that many times, 280s but no other types of errors. Pass zero to never retry. 280s 280s If ``False``, then retries are disabled and any exception is raised 280s immediately. Also, instead of raising a MaxRetryError on redirects, 280s the redirect response will be returned. 280s 280s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 280s 280s :param redirect: 280s If True, automatically handle redirects (status codes 301, 302, 280s 303, 307, 308). Each redirect counts as a retry. Disabling retries 280s will disable redirect, too. 
280s 280s :param assert_same_host: 280s If ``True``, will make sure that the host of the pool requests is 280s consistent else will raise HostChangedError. When ``False``, you can 280s use the pool on an HTTP proxy and request foreign hosts. 280s 280s :param timeout: 280s If specified, overrides the default timeout for this one 280s request. It may be a float (in seconds) or an instance of 280s :class:`urllib3.util.Timeout`. 280s 280s :param pool_timeout: 280s If set and the pool is set to block=True, then this method will 280s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 280s connection is available within the time period. 280s 280s :param bool preload_content: 280s If True, the response's body will be preloaded into memory. 280s 280s :param bool decode_content: 280s If True, will attempt to decode the body based on the 280s 'content-encoding' header. 280s 280s :param release_conn: 280s If False, then the urlopen call will not release the connection 280s back into the pool once a response is received (but will release if 280s you read the entire contents of the response such as when 280s `preload_content=True`). This is useful if you're not preloading 280s the response's content immediately. You will need to call 280s ``r.release_conn()`` on the response ``r`` to return the connection 280s back into the pool. If None, it takes the value of ``preload_content`` 280s which defaults to ``True``. 280s 280s :param bool chunked: 280s If True, urllib3 will send the body using chunked transfer 280s encoding. Otherwise, urllib3 will send the body using the standard 280s content-length form. Defaults to False. 280s 280s :param int body_pos: 280s Position to seek to in file-like body in the event of a retry or 280s redirect. Typically this won't need to be set because urllib3 will 280s auto-populate the value when needed. 
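The integer form of ``retries`` described above can be modeled with a toy class (this is NOT urllib3's `Retry`, just the bookkeeping it describes: each connection error spends one unit of budget, and a budget below zero means give up):

```python
class SimpleRetry:
    """Toy model of an integer retry budget, for illustration only."""

    def __init__(self, total):
        self.total = total

    def increment(self):
        if self.total is False:
            # retries disabled: re-raise the error immediately
            raise RuntimeError("retries disabled")
        nxt = SimpleRetry(self.total - 1)
        if nxt.total < 0:
            # exhausted, analogous to urllib3 raising MaxRetryError
            raise RuntimeError("max retries exceeded")
        return nxt
```

This matches the `Retry(total=0, ...)` seen in the traceback: a zero budget means the very first connection error exhausts it, which is why the failure surfaces after a single attempt.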
280s """ 280s parsed_url = parse_url(url) 280s destination_scheme = parsed_url.scheme 280s 280s if headers is None: 280s headers = self.headers 280s 280s if not isinstance(retries, Retry): 280s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 280s 280s if release_conn is None: 280s release_conn = preload_content 280s 280s # Check host 280s if assert_same_host and not self.is_same_host(url): 280s raise HostChangedError(self, url, retries) 280s 280s # Ensure that the URL we're connecting to is properly encoded 280s if url.startswith("/"): 280s url = to_str(_encode_target(url)) 280s else: 280s url = to_str(parsed_url.url) 280s 280s conn = None 280s 280s # Track whether `conn` needs to be released before 280s # returning/raising/recursing. Update this variable if necessary, and 280s # leave `release_conn` constant throughout the function. That way, if 280s # the function recurses, the original value of `release_conn` will be 280s # passed down into the recursive call, and its value will be respected. 280s # 280s # See issue #651 [1] for details. 280s # 280s # [1] 280s release_this_conn = release_conn 280s 280s http_tunnel_required = connection_requires_http_tunnel( 280s self.proxy, self.proxy_config, destination_scheme 280s ) 280s 280s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 280s # have to copy the headers dict so we can safely change it without those 280s # changes being reflected in anyone else's copy. 280s if not http_tunnel_required: 280s headers = headers.copy() # type: ignore[attr-defined] 280s headers.update(self.proxy_headers) # type: ignore[union-attr] 280s 280s # Must keep the exception bound to a separate variable or else Python 3 280s # complains about UnboundLocalError. 280s err = None 280s 280s # Keep track of whether we cleanly exited the except block. This 280s # ensures we do proper cleanup in finally. 280s clean_exit = False 280s 280s # Rewind body position, if needed. 
Record current position 280s # for future rewinds in the event of a redirect/retry. 280s body_pos = set_file_position(body, body_pos) 280s 280s try: 280s # Request a connection from the queue. 280s timeout_obj = self._get_timeout(timeout) 280s conn = self._get_conn(timeout=pool_timeout) 280s 280s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 280s 280s # Is this a closed/new connection that requires CONNECT tunnelling? 280s if self.proxy is not None and http_tunnel_required and conn.is_closed: 280s try: 280s self._prepare_proxy(conn) 280s except (BaseSSLError, OSError, SocketTimeout) as e: 280s self._raise_timeout( 280s err=e, url=self.proxy.url, timeout_value=conn.timeout 280s ) 280s raise 280s 280s # If we're going to release the connection in ``finally:``, then 280s # the response doesn't need to know about the connection. Otherwise 280s # it will also try to release it and we'll have a double-release 280s # mess. 280s response_conn = conn if not release_conn else None 280s 280s # Make the request on the HTTPConnection object 280s > response = self._make_request( 280s conn, 280s method, 280s url, 280s timeout=timeout_obj, 280s body=body, 280s headers=headers, 280s chunked=chunked, 280s retries=retries, 280s response_conn=response_conn, 280s preload_content=preload_content, 280s decode_content=decode_content, 280s **response_kw, 280s ) 280s 280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 280s conn.request( 280s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 280s self.endheaders() 280s /usr/lib/python3.12/http/client.py:1331: in endheaders 280s self._send_output(message_body, encode_chunked=encode_chunked) 280s /usr/lib/python3.12/http/client.py:1091: in _send_output 280s self.send(msg) 280s /usr/lib/python3.12/http/client.py:1035: in 
send 280s self.connect() 280s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 280s self.sock = self._new_conn() 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s self = 280s 280s def _new_conn(self) -> socket.socket: 280s """Establish a socket connection and set nodelay settings on it. 280s 280s :return: New socket connection. 280s """ 280s try: 280s sock = connection.create_connection( 280s (self._dns_host, self.port), 280s self.timeout, 280s source_address=self.source_address, 280s socket_options=self.socket_options, 280s ) 280s except socket.gaierror as e: 280s raise NameResolutionError(self.host, self, e) from e 280s except SocketTimeout as e: 280s raise ConnectTimeoutError( 280s self, 280s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 280s ) from e 280s 280s except OSError as e: 280s > raise NewConnectionError( 280s self, f"Failed to establish a new connection: {e}" 280s ) from e 280s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 280s 280s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 280s 280s The above exception was the direct cause of the following exception: 280s 280s self = 280s request = , stream = False 280s timeout = Timeout(connect=None, read=None, total=None), verify = True 280s cert = None, proxies = OrderedDict() 280s 280s def send( 280s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 280s ): 280s """Sends PreparedRequest object. Returns Response object. 280s 280s :param request: The :class:`PreparedRequest ` being sent. 280s :param stream: (optional) Whether to stream the request content. 280s :param timeout: (optional) How long to wait for the server to send 280s data before giving up, as a float, or a :ref:`(connect timeout, 280s read timeout) ` tuple. 
280s :type timeout: float or tuple or urllib3 Timeout object 280s :param verify: (optional) Either a boolean, in which case it controls whether 280s we verify the server's TLS certificate, or a string, in which case it 280s must be a path to a CA bundle to use 280s :param cert: (optional) Any user-provided SSL certificate to be trusted. 280s :param proxies: (optional) The proxies dictionary to apply to the request. 280s :rtype: requests.Response 280s """ 280s 280s try: 280s conn = self.get_connection(request.url, proxies) 280s except LocationValueError as e: 280s raise InvalidURL(e, request=request) 280s 280s self.cert_verify(conn, request.url, verify, cert) 280s url = self.request_url(request, proxies) 280s self.add_headers( 280s request, 280s stream=stream, 280s timeout=timeout, 280s verify=verify, 280s cert=cert, 280s proxies=proxies, 280s ) 280s 280s chunked = not (request.body is None or "Content-Length" in request.headers) 280s 280s if isinstance(timeout, tuple): 280s try: 280s connect, read = timeout 280s timeout = TimeoutSauce(connect=connect, read=read) 280s except ValueError: 280s raise ValueError( 280s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 280s f"or a single float to set both timeouts to the same value." 
280s ) 280s elif isinstance(timeout, TimeoutSauce): 280s pass 280s else: 280s timeout = TimeoutSauce(connect=timeout, read=timeout) 280s 280s try: 280s > resp = conn.urlopen( 280s method=request.method, 280s url=url, 280s body=request.body, 280s headers=request.headers, 280s redirect=False, 280s assert_same_host=False, 280s preload_content=False, 280s decode_content=False, 280s retries=self.max_retries, 280s timeout=timeout, 280s chunked=chunked, 280s ) 280s 280s /usr/lib/python3/dist-packages/requests/adapters.py:486: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 280s retries = retries.increment( 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 280s method = 'GET', url = '/a%40b/api/contents', response = None 280s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 280s _pool = 280s _stacktrace = 280s 280s def increment( 280s self, 280s method: str | None = None, 280s url: str | None = None, 280s response: BaseHTTPResponse | None = None, 280s error: Exception | None = None, 280s _pool: ConnectionPool | None = None, 280s _stacktrace: TracebackType | None = None, 280s ) -> Retry: 280s """Return a new Retry object with incremented retry counters. 280s 280s :param response: A response object, or None, if the server did not 280s return a response. 280s :type response: :class:`~urllib3.response.BaseHTTPResponse` 280s :param Exception error: An error encountered during the request, or 280s None if the response was received successfully. 280s 280s :return: A new ``Retry`` object. 280s """ 280s if self.total is False and error: 280s # Disabled, indicate to re-raise the error. 
280s raise reraise(type(error), error, _stacktrace) 280s 280s total = self.total 280s if total is not None: 280s total -= 1 280s 280s connect = self.connect 280s read = self.read 280s redirect = self.redirect 280s status_count = self.status 280s other = self.other 280s cause = "unknown" 280s status = None 280s redirect_location = None 280s 280s if error and self._is_connection_error(error): 280s # Connect retry? 280s if connect is False: 280s raise reraise(type(error), error, _stacktrace) 280s elif connect is not None: 280s connect -= 1 280s 280s elif error and self._is_read_error(error): 280s # Read retry? 280s if read is False or method is None or not self._is_method_retryable(method): 280s raise reraise(type(error), error, _stacktrace) 280s elif read is not None: 280s read -= 1 280s 280s elif error: 280s # Other retry? 280s if other is not None: 280s other -= 1 280s 280s elif response and response.get_redirect_location(): 280s # Redirect retry? 280s if redirect is not None: 280s redirect -= 1 280s cause = "too many redirects" 280s response_redirect_location = response.get_redirect_location() 280s if response_redirect_location: 280s redirect_location = response_redirect_location 280s status = response.status 280s 280s else: 280s # Incrementing because of a server error like a 500 in 280s # status_forcelist and the given method is in the allowed_methods 280s cause = ResponseError.GENERIC_ERROR 280s if response and response.status: 280s if status_count is not None: 280s status_count -= 1 280s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 280s status = response.status 280s 280s history = self.history + ( 280s RequestHistory(method, url, error, status, redirect_location), 280s ) 280s 280s new_retry = self.new( 280s total=total, 280s connect=connect, 280s read=read, 280s redirect=redirect, 280s status=status_count, 280s other=other, 280s history=history, 280s ) 280s 280s if new_retry.is_exhausted(): 280s reason = error or 
ResponseError(cause) 280s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 280s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 280s 280s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 280s 280s During handling of the above exception, another exception occurred: 280s 280s cls = 280s 280s @classmethod 280s def wait_until_alive(cls): 280s """Wait for the server to be alive""" 280s url = cls.base_url() + 'api/contents' 280s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 280s try: 280s > cls.fetch_url(url) 280s 280s notebook/tests/launchnotebook.py:53: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s notebook/tests/launchnotebook.py:82: in fetch_url 280s return requests.get(url) 280s /usr/lib/python3/dist-packages/requests/api.py:73: in get 280s return request("get", url, params=params, **kwargs) 280s /usr/lib/python3/dist-packages/requests/api.py:59: in request 280s return session.request(method=method, url=url, **kwargs) 280s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 280s resp = self.send(prep, **send_kwargs) 280s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 280s r = adapter.send(request, **kwargs) 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s self = 280s request = , stream = False 280s timeout = Timeout(connect=None, read=None, total=None), verify = True 280s cert = None, proxies = OrderedDict() 280s 280s def send( 280s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 280s ): 280s """Sends PreparedRequest object. Returns Response object. 280s 280s :param request: The :class:`PreparedRequest ` being sent. 
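The `wait_until_alive` loop quoted above is a standard poll-until-ready pattern. A self-contained sketch using only the stdlib (the handler and URL path are illustrative; the real test polls the notebook server's `api/contents` endpoint):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class OkHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), OkHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

def wait_until_alive(url, attempts=10):
    """Poll until the URL answers; chain the last error if it never does."""
    last_error = None
    for _ in range(attempts):
        try:
            return urllib.request.urlopen(url, timeout=1).status
        except OSError as e:
            last_error = e
    raise RuntimeError("The server failed to start") from last_error

status = wait_until_alive(f"http://127.0.0.1:{server.server_port}/api/contents")
server.shutdown()
```

The failure mode in this log is the other branch: the server thread dies, every poll gets ECONNREFUSED, and the loop converts the last connection error into the `RuntimeError` seen at `launchnotebook.py:59`.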
280s :param stream: (optional) Whether to stream the request content. 280s :param timeout: (optional) How long to wait for the server to send 280s data before giving up, as a float, or a :ref:`(connect timeout, 280s read timeout) ` tuple. 280s :type timeout: float or tuple or urllib3 Timeout object 280s :param verify: (optional) Either a boolean, in which case it controls whether 280s we verify the server's TLS certificate, or a string, in which case it 280s must be a path to a CA bundle to use 280s :param cert: (optional) Any user-provided SSL certificate to be trusted. 280s :param proxies: (optional) The proxies dictionary to apply to the request. 280s :rtype: requests.Response 280s """ 280s 280s try: 280s conn = self.get_connection(request.url, proxies) 280s except LocationValueError as e: 280s raise InvalidURL(e, request=request) 280s 280s self.cert_verify(conn, request.url, verify, cert) 280s url = self.request_url(request, proxies) 280s self.add_headers( 280s request, 280s stream=stream, 280s timeout=timeout, 280s verify=verify, 280s cert=cert, 280s proxies=proxies, 280s ) 280s 280s chunked = not (request.body is None or "Content-Length" in request.headers) 280s 280s if isinstance(timeout, tuple): 280s try: 280s connect, read = timeout 280s timeout = TimeoutSauce(connect=connect, read=read) 280s except ValueError: 280s raise ValueError( 280s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 280s f"or a single float to set both timeouts to the same value." 
280s ) 280s elif isinstance(timeout, TimeoutSauce): 280s pass 280s else: 280s timeout = TimeoutSauce(connect=timeout, read=timeout) 280s 280s try: 280s resp = conn.urlopen( 280s method=request.method, 280s url=url, 280s body=request.body, 280s headers=request.headers, 280s redirect=False, 280s assert_same_host=False, 280s preload_content=False, 280s decode_content=False, 280s retries=self.max_retries, 280s timeout=timeout, 280s chunked=chunked, 280s ) 280s 280s except (ProtocolError, OSError) as err: 280s raise ConnectionError(err, request=request) 280s 280s except MaxRetryError as e: 280s if isinstance(e.reason, ConnectTimeoutError): 280s # TODO: Remove this in 3.0.0: see #2811 280s if not isinstance(e.reason, NewConnectionError): 280s raise ConnectTimeout(e, request=request) 280s 280s if isinstance(e.reason, ResponseError): 280s raise RetryError(e, request=request) 280s 280s if isinstance(e.reason, _ProxyError): 280s raise ProxyError(e, request=request) 280s 280s if isinstance(e.reason, _SSLError): 280s # This branch is for urllib3 v1.22 and later. 
280s raise SSLError(e, request=request) 280s 280s > raise ConnectionError(e, request=request) 280s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 280s 280s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 280s 280s The above exception was the direct cause of the following exception: 280s 280s cls = 280s 280s @classmethod 280s def setup_class(cls): 280s cls.tmp_dir = TemporaryDirectory() 280s def tmp(*parts): 280s path = os.path.join(cls.tmp_dir.name, *parts) 280s try: 280s os.makedirs(path) 280s except OSError as e: 280s if e.errno != errno.EEXIST: 280s raise 280s return path 280s 280s cls.home_dir = tmp('home') 280s data_dir = cls.data_dir = tmp('data') 280s config_dir = cls.config_dir = tmp('config') 280s runtime_dir = cls.runtime_dir = tmp('runtime') 280s cls.notebook_dir = tmp('notebooks') 280s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 280s cls.env_patch.start() 280s # Patch systemwide & user-wide data & config directories, to isolate 280s # the tests from oddities of the local setup. But leave Python env 280s # locations alone, so data files for e.g. nbconvert are accessible. 280s # If this isolation isn't sufficient, you may need to run the tests in 280s # a virtualenv or conda env. 
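The `tmp()` helper in `setup_class` above uses the pre-Python-3.2 `errno.EEXIST` idiom to tolerate an already-existing directory. The modern equivalent, sketched under the same structure:

```python
import os
import tempfile

tmp_dir = tempfile.TemporaryDirectory()

def tmp(*parts):
    """Same effect as the errno.EEXIST dance in the traceback above;
    exist_ok=True (Python 3.2+) makes a repeated call a no-op
    instead of an OSError."""
    path = os.path.join(tmp_dir.name, *parts)
    os.makedirs(path, exist_ok=True)
    return path

data_dir = tmp("data")
again = tmp("data")  # second call must not raise
```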
280s cls.path_patch = patch.multiple( 280s jupyter_core.paths, 280s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 280s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 280s ) 280s cls.path_patch.start() 280s 280s config = cls.config or Config() 280s config.NotebookNotary.db_file = ':memory:' 280s 280s cls.token = hexlify(os.urandom(4)).decode('ascii') 280s 280s started = Event() 280s def start_thread(): 280s try: 280s bind_args = cls.get_bind_args() 280s app = cls.notebook = NotebookApp( 280s port_retries=0, 280s open_browser=False, 280s config_dir=cls.config_dir, 280s data_dir=cls.data_dir, 280s runtime_dir=cls.runtime_dir, 280s notebook_dir=cls.notebook_dir, 280s base_url=cls.url_prefix, 280s config=config, 280s allow_root=True, 280s token=cls.token, 280s **bind_args 280s ) 280s if "asyncio" in sys.modules: 280s app._init_asyncio_patch() 280s import asyncio 280s 280s asyncio.set_event_loop(asyncio.new_event_loop()) 280s # Patch the current loop in order to match production 280s # behavior 280s import nest_asyncio 280s 280s nest_asyncio.apply() 280s # don't register signal handler during tests 280s app.init_signal = lambda : None 280s # clear log handlers and propagate to root for nose to capture it 280s # needs to be redone after initialize, which reconfigures logging 280s app.log.propagate = True 280s app.log.handlers = [] 280s app.initialize(argv=cls.get_argv()) 280s app.log.propagate = True 280s app.log.handlers = [] 280s loop = IOLoop.current() 280s loop.add_callback(started.set) 280s app.start() 280s finally: 280s # set the event, so failure to start doesn't cause a hang 280s started.set() 280s app.session_manager.close() 280s cls.notebook_thread = Thread(target=start_thread) 280s cls.notebook_thread.daemon = True 280s cls.notebook_thread.start() 280s started.wait() 280s > cls.wait_until_alive() 280s 280s notebook/tests/launchnotebook.py:198: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s cls = 280s 280s 
@classmethod 280s def wait_until_alive(cls): 280s """Wait for the server to be alive""" 280s url = cls.base_url() + 'api/contents' 280s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 280s try: 280s cls.fetch_url(url) 280s except ModuleNotFoundError as error: 280s # Errors that should be immediately thrown back to caller 280s raise error 280s except Exception as e: 280s if not cls.notebook_thread.is_alive(): 280s > raise RuntimeError("The notebook server failed to start") from e 280s E RuntimeError: The notebook server failed to start 280s 280s notebook/tests/launchnotebook.py:59: RuntimeError 280s _____________________ ERROR at setup of APITest.test_copy ______________________ 280s 280s self = 280s 280s def _new_conn(self) -> socket.socket: 280s """Establish a socket connection and set nodelay settings on it. 280s 280s :return: New socket connection. 280s """ 280s try: 280s > sock = connection.create_connection( 280s (self._dns_host, self.port), 280s self.timeout, 280s source_address=self.source_address, 280s socket_options=self.socket_options, 280s ) 280s 280s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 280s raise err 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s address = ('localhost', 12341), timeout = None, source_address = None 280s socket_options = [(6, 1, 1)] 280s 280s def create_connection( 280s address: tuple[str, int], 280s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 280s source_address: tuple[str, int] | None = None, 280s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 280s ) -> socket.socket: 280s """Connect to *address* and return the socket object. 280s 280s Convenience function. Connect to *address* (a 2-tuple ``(host, 280s port)``) and return the socket object. 
Passing the optional 280s *timeout* parameter will set the timeout on the socket instance 280s before attempting to connect. If no *timeout* is supplied, the 280s global default timeout setting returned by :func:`socket.getdefaulttimeout` 280s is used. If *source_address* is set it must be a tuple of (host, port) 280s for the socket to bind as a source address before making the connection. 280s An host of '' or port 0 tells the OS to use the default. 280s """ 280s 280s host, port = address 280s if host.startswith("["): 280s host = host.strip("[]") 280s err = None 280s 280s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 280s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 280s # The original create_connection function always returns all records. 280s family = allowed_gai_family() 280s 280s try: 280s host.encode("idna") 280s except UnicodeError: 280s raise LocationParseError(f"'{host}', label empty or too long") from None 280s 280s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 280s af, socktype, proto, canonname, sa = res 280s sock = None 280s try: 280s sock = socket.socket(af, socktype, proto) 280s 280s # If provided, set socket level options before connecting. 
280s _set_socket_options(sock, socket_options) 280s 280s if timeout is not _DEFAULT_TIMEOUT: 280s sock.settimeout(timeout) 280s if source_address: 280s sock.bind(source_address) 280s > sock.connect(sa) 280s E ConnectionRefusedError: [Errno 111] Connection refused 280s 280s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 280s 280s The above exception was the direct cause of the following exception: 280s 280s self = 280s method = 'GET', url = '/a%40b/api/contents', body = None 280s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 280s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 280s redirect = False, assert_same_host = False 280s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 280s release_conn = False, chunked = False, body_pos = None, preload_content = False 280s decode_content = False, response_kw = {} 280s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 280s destination_scheme = None, conn = None, release_this_conn = True 280s http_tunnel_required = False, err = None, clean_exit = False 280s 280s def urlopen( # type: ignore[override] 280s self, 280s method: str, 280s url: str, 280s body: _TYPE_BODY | None = None, 280s headers: typing.Mapping[str, str] | None = None, 280s retries: Retry | bool | int | None = None, 280s redirect: bool = True, 280s assert_same_host: bool = True, 280s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 280s pool_timeout: int | None = None, 280s release_conn: bool | None = None, 280s chunked: bool = False, 280s body_pos: _TYPE_BODY_POSITION | None = None, 280s preload_content: bool = True, 280s decode_content: bool = True, 280s **response_kw: typing.Any, 280s ) -> BaseHTTPResponse: 280s """ 280s Get a connection from the pool and perform an HTTP request. 
This is the 280s lowest level call for making a request, so you'll need to specify all 280s the raw details. 280s 280s .. note:: 280s 280s More commonly, it's appropriate to use a convenience method 280s such as :meth:`request`. 280s 280s .. note:: 280s 280s `release_conn` will only behave as expected if 280s `preload_content=False` because we want to make 280s `preload_content=False` the default behaviour someday soon without 280s breaking backwards compatibility. 280s 280s :param method: 280s HTTP request method (such as GET, POST, PUT, etc.) 280s 280s :param url: 280s The URL to perform the request on. 280s 280s :param body: 280s Data to send in the request body, either :class:`str`, :class:`bytes`, 280s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 280s 280s :param headers: 280s Dictionary of custom headers to send, such as User-Agent, 280s If-None-Match, etc. If None, pool headers are used. If provided, 280s these headers completely replace any pool-specific headers. 280s 280s :param retries: 280s Configure the number of retries to allow before raising a 280s :class:`~urllib3.exceptions.MaxRetryError` exception. 280s 280s Pass ``None`` to retry until you receive a response. Pass a 280s :class:`~urllib3.util.retry.Retry` object for fine-grained control 280s over different types of retries. 280s Pass an integer number to retry connection errors that many times, 280s but no other types of errors. Pass zero to never retry. 280s 280s If ``False``, then retries are disabled and any exception is raised 280s immediately. Also, instead of raising a MaxRetryError on redirects, 280s the redirect response will be returned. 280s 280s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 280s 280s :param redirect: 280s If True, automatically handle redirects (status codes 301, 302, 280s 303, 307, 308). Each redirect counts as a retry. Disabling retries 280s will disable redirect, too. 
280s 280s :param assert_same_host: 280s If ``True``, will make sure that the host of the pool requests is 280s consistent else will raise HostChangedError. When ``False``, you can 280s use the pool on an HTTP proxy and request foreign hosts. 280s 280s :param timeout: 280s If specified, overrides the default timeout for this one 280s request. It may be a float (in seconds) or an instance of 280s :class:`urllib3.util.Timeout`. 280s 280s :param pool_timeout: 280s If set and the pool is set to block=True, then this method will 280s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 280s connection is available within the time period. 280s 280s :param bool preload_content: 280s If True, the response's body will be preloaded into memory. 280s 280s :param bool decode_content: 280s If True, will attempt to decode the body based on the 280s 'content-encoding' header. 280s 280s :param release_conn: 280s If False, then the urlopen call will not release the connection 280s back into the pool once a response is received (but will release if 280s you read the entire contents of the response such as when 280s `preload_content=True`). This is useful if you're not preloading 280s the response's content immediately. You will need to call 280s ``r.release_conn()`` on the response ``r`` to return the connection 280s back into the pool. If None, it takes the value of ``preload_content`` 280s which defaults to ``True``. 280s 280s :param bool chunked: 280s If True, urllib3 will send the body using chunked transfer 280s encoding. Otherwise, urllib3 will send the body using the standard 280s content-length form. Defaults to False. 280s 280s :param int body_pos: 280s Position to seek to in file-like body in the event of a retry or 280s redirect. Typically this won't need to be set because urllib3 will 280s auto-populate the value when needed. 
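The `retries` semantics documented in the urlopen docstring above (pass ``None`` to retry until a response arrives, an integer to retry connection errors that many times, zero to never retry, ``False`` to disable retries and re-raise immediately) can be sketched with a small stdlib-only helper. This is an illustration of the documented behaviour, not urllib3 code; `run_with_retries` and `fetch` are hypothetical names.

```python
# Illustrative sketch of the documented `retries` semantics; not urllib3
# internals. `fetch` is any zero-argument callable that may raise
# ConnectionError.
def run_with_retries(fetch, retries):
    while True:
        try:
            return fetch()
        except ConnectionError:
            if retries is False:        # retries disabled: raise immediately
                raise
            if retries is not None:     # an int: retry that many times
                if retries <= 0:        # zero left: never retry again
                    raise
                retries -= 1
            # retries is None: keep retrying until a response arrives
```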
280s """ 280s parsed_url = parse_url(url) 280s destination_scheme = parsed_url.scheme 280s 280s if headers is None: 280s headers = self.headers 280s 280s if not isinstance(retries, Retry): 280s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 280s 280s if release_conn is None: 280s release_conn = preload_content 280s 280s # Check host 280s if assert_same_host and not self.is_same_host(url): 280s raise HostChangedError(self, url, retries) 280s 280s # Ensure that the URL we're connecting to is properly encoded 280s if url.startswith("/"): 280s url = to_str(_encode_target(url)) 280s else: 280s url = to_str(parsed_url.url) 280s 280s conn = None 280s 280s # Track whether `conn` needs to be released before 280s # returning/raising/recursing. Update this variable if necessary, and 280s # leave `release_conn` constant throughout the function. That way, if 280s # the function recurses, the original value of `release_conn` will be 280s # passed down into the recursive call, and its value will be respected. 280s # 280s # See issue #651 [1] for details. 280s # 280s # [1] 280s release_this_conn = release_conn 280s 280s http_tunnel_required = connection_requires_http_tunnel( 280s self.proxy, self.proxy_config, destination_scheme 280s ) 280s 280s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 280s # have to copy the headers dict so we can safely change it without those 280s # changes being reflected in anyone else's copy. 280s if not http_tunnel_required: 280s headers = headers.copy() # type: ignore[attr-defined] 280s headers.update(self.proxy_headers) # type: ignore[union-attr] 280s 280s # Must keep the exception bound to a separate variable or else Python 3 280s # complains about UnboundLocalError. 280s err = None 280s 280s # Keep track of whether we cleanly exited the except block. This 280s # ensures we do proper cleanup in finally. 280s clean_exit = False 280s 280s # Rewind body position, if needed. 
Record current position 280s # for future rewinds in the event of a redirect/retry. 280s body_pos = set_file_position(body, body_pos) 280s 280s try: 280s # Request a connection from the queue. 280s timeout_obj = self._get_timeout(timeout) 280s conn = self._get_conn(timeout=pool_timeout) 280s 280s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 280s 280s # Is this a closed/new connection that requires CONNECT tunnelling? 280s if self.proxy is not None and http_tunnel_required and conn.is_closed: 280s try: 280s self._prepare_proxy(conn) 280s except (BaseSSLError, OSError, SocketTimeout) as e: 280s self._raise_timeout( 280s err=e, url=self.proxy.url, timeout_value=conn.timeout 280s ) 280s raise 280s 280s # If we're going to release the connection in ``finally:``, then 280s # the response doesn't need to know about the connection. Otherwise 280s # it will also try to release it and we'll have a double-release 280s # mess. 280s response_conn = conn if not release_conn else None 280s 280s # Make the request on the HTTPConnection object 280s > response = self._make_request( 280s conn, 280s method, 280s url, 280s timeout=timeout_obj, 280s body=body, 280s headers=headers, 280s chunked=chunked, 280s retries=retries, 280s response_conn=response_conn, 280s preload_content=preload_content, 280s decode_content=decode_content, 280s **response_kw, 280s ) 280s 280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 280s conn.request( 280s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 280s self.endheaders() 280s /usr/lib/python3.12/http/client.py:1331: in endheaders 280s self._send_output(message_body, encode_chunked=encode_chunked) 280s /usr/lib/python3.12/http/client.py:1091: in _send_output 280s self.send(msg) 280s /usr/lib/python3.12/http/client.py:1035: in 
send 280s self.connect() 280s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 280s self.sock = self._new_conn() 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s self = 280s 280s def _new_conn(self) -> socket.socket: 280s """Establish a socket connection and set nodelay settings on it. 280s 280s :return: New socket connection. 280s """ 280s try: 280s sock = connection.create_connection( 280s (self._dns_host, self.port), 280s self.timeout, 280s source_address=self.source_address, 280s socket_options=self.socket_options, 280s ) 280s except socket.gaierror as e: 280s raise NameResolutionError(self.host, self, e) from e 280s except SocketTimeout as e: 280s raise ConnectTimeoutError( 280s self, 280s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 280s ) from e 280s 280s except OSError as e: 280s > raise NewConnectionError( 280s self, f"Failed to establish a new connection: {e}" 280s ) from e 280s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 280s 280s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 280s 280s The above exception was the direct cause of the following exception: 280s 280s self = 280s request = , stream = False 280s timeout = Timeout(connect=None, read=None, total=None), verify = True 280s cert = None, proxies = OrderedDict() 280s 280s def send( 280s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 280s ): 280s """Sends PreparedRequest object. Returns Response object. 280s 280s :param request: The :class:`PreparedRequest ` being sent. 280s :param stream: (optional) Whether to stream the request content. 280s :param timeout: (optional) How long to wait for the server to send 280s data before giving up, as a float, or a :ref:`(connect timeout, 280s read timeout) ` tuple. 
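The timeout handling in `HTTPAdapter.send` shown in this log normalises either a ``(connect, read)`` tuple or a single float into separate connect/read timeouts. A minimal stdlib sketch of that normalisation, where `Timeouts` and `normalize_timeout` are hypothetical stand-ins for urllib3's `Timeout` object:

```python
from typing import NamedTuple, Optional, Tuple, Union

# Hypothetical stand-in for urllib3's Timeout object, for illustration.
class Timeouts(NamedTuple):
    connect: Optional[float]
    read: Optional[float]

def normalize_timeout(
    timeout: Union[None, float, Tuple[float, float]]
) -> Timeouts:
    if isinstance(timeout, tuple):
        connect, read = timeout         # malformed tuples raise ValueError,
        return Timeouts(connect, read)  # mirroring the check in send()
    return Timeouts(timeout, timeout)   # a single value sets both timeouts
```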
280s :type timeout: float or tuple or urllib3 Timeout object 280s :param verify: (optional) Either a boolean, in which case it controls whether 280s we verify the server's TLS certificate, or a string, in which case it 280s must be a path to a CA bundle to use 280s :param cert: (optional) Any user-provided SSL certificate to be trusted. 280s :param proxies: (optional) The proxies dictionary to apply to the request. 280s :rtype: requests.Response 280s """ 280s 280s try: 280s conn = self.get_connection(request.url, proxies) 280s except LocationValueError as e: 280s raise InvalidURL(e, request=request) 280s 280s self.cert_verify(conn, request.url, verify, cert) 280s url = self.request_url(request, proxies) 280s self.add_headers( 280s request, 280s stream=stream, 280s timeout=timeout, 280s verify=verify, 280s cert=cert, 280s proxies=proxies, 280s ) 280s 280s chunked = not (request.body is None or "Content-Length" in request.headers) 280s 280s if isinstance(timeout, tuple): 280s try: 280s connect, read = timeout 280s timeout = TimeoutSauce(connect=connect, read=read) 280s except ValueError: 280s raise ValueError( 280s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 280s f"or a single float to set both timeouts to the same value." 
280s ) 280s elif isinstance(timeout, TimeoutSauce): 280s pass 280s else: 280s timeout = TimeoutSauce(connect=timeout, read=timeout) 280s 280s try: 280s > resp = conn.urlopen( 280s method=request.method, 280s url=url, 280s body=request.body, 280s headers=request.headers, 280s redirect=False, 280s assert_same_host=False, 280s preload_content=False, 280s decode_content=False, 280s retries=self.max_retries, 280s timeout=timeout, 280s chunked=chunked, 280s ) 280s 280s /usr/lib/python3/dist-packages/requests/adapters.py:486: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 280s retries = retries.increment( 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 280s method = 'GET', url = '/a%40b/api/contents', response = None 280s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 280s _pool = 280s _stacktrace = 280s 280s def increment( 280s self, 280s method: str | None = None, 280s url: str | None = None, 280s response: BaseHTTPResponse | None = None, 280s error: Exception | None = None, 280s _pool: ConnectionPool | None = None, 280s _stacktrace: TracebackType | None = None, 280s ) -> Retry: 280s """Return a new Retry object with incremented retry counters. 280s 280s :param response: A response object, or None, if the server did not 280s return a response. 280s :type response: :class:`~urllib3.response.BaseHTTPResponse` 280s :param Exception error: An error encountered during the request, or 280s None if the response was received successfully. 280s 280s :return: A new ``Retry`` object. 280s """ 280s if self.total is False and error: 280s # Disabled, indicate to re-raise the error. 
280s raise reraise(type(error), error, _stacktrace) 280s 280s total = self.total 280s if total is not None: 280s total -= 1 280s 280s connect = self.connect 280s read = self.read 280s redirect = self.redirect 280s status_count = self.status 280s other = self.other 280s cause = "unknown" 280s status = None 280s redirect_location = None 280s 280s if error and self._is_connection_error(error): 280s # Connect retry? 280s if connect is False: 280s raise reraise(type(error), error, _stacktrace) 280s elif connect is not None: 280s connect -= 1 280s 280s elif error and self._is_read_error(error): 280s # Read retry? 280s if read is False or method is None or not self._is_method_retryable(method): 280s raise reraise(type(error), error, _stacktrace) 280s elif read is not None: 280s read -= 1 280s 280s elif error: 280s # Other retry? 280s if other is not None: 280s other -= 1 280s 280s elif response and response.get_redirect_location(): 280s # Redirect retry? 280s if redirect is not None: 280s redirect -= 1 280s cause = "too many redirects" 280s response_redirect_location = response.get_redirect_location() 280s if response_redirect_location: 280s redirect_location = response_redirect_location 280s status = response.status 280s 280s else: 280s # Incrementing because of a server error like a 500 in 280s # status_forcelist and the given method is in the allowed_methods 280s cause = ResponseError.GENERIC_ERROR 280s if response and response.status: 280s if status_count is not None: 280s status_count -= 1 280s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 280s status = response.status 280s 280s history = self.history + ( 280s RequestHistory(method, url, error, status, redirect_location), 280s ) 280s 280s new_retry = self.new( 280s total=total, 280s connect=connect, 280s read=read, 280s redirect=redirect, 280s status=status_count, 280s other=other, 280s history=history, 280s ) 280s 280s if new_retry.is_exhausted(): 280s reason = error or 
ResponseError(cause) 280s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 280s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 280s 280s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 280s 280s During handling of the above exception, another exception occurred: 280s 280s cls = 280s 280s @classmethod 280s def wait_until_alive(cls): 280s """Wait for the server to be alive""" 280s url = cls.base_url() + 'api/contents' 280s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 280s try: 280s > cls.fetch_url(url) 280s 280s notebook/tests/launchnotebook.py:53: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s notebook/tests/launchnotebook.py:82: in fetch_url 280s return requests.get(url) 280s /usr/lib/python3/dist-packages/requests/api.py:73: in get 280s return request("get", url, params=params, **kwargs) 280s /usr/lib/python3/dist-packages/requests/api.py:59: in request 280s return session.request(method=method, url=url, **kwargs) 280s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 280s resp = self.send(prep, **send_kwargs) 280s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 280s r = adapter.send(request, **kwargs) 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s self = 280s request = , stream = False 280s timeout = Timeout(connect=None, read=None, total=None), verify = True 280s cert = None, proxies = OrderedDict() 280s 280s def send( 280s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 280s ): 280s """Sends PreparedRequest object. Returns Response object. 280s 280s :param request: The :class:`PreparedRequest ` being sent. 
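The `wait_until_alive` loop in the traceback above polls an endpoint at a fixed interval until a deadline, which is the standard way to wait for a server started in a background thread. A sketch of the pattern with illustrative names (`probe`, `max_wait`, `interval` are not identifiers from the notebook test suite):

```python
import time

# Poll `probe` until it succeeds or `max_wait` seconds elapse, sleeping
# `interval` between attempts; mirrors the MAX_WAITTIME/POLL_INTERVAL loop.
def wait_until(probe, max_wait=1.0, interval=0.05):
    deadline = time.monotonic() + max_wait
    while time.monotonic() < deadline:
        try:
            return probe()
        except Exception:
            time.sleep(interval)
    raise RuntimeError("server did not come up in time")
```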
280s :param stream: (optional) Whether to stream the request content. 280s :param timeout: (optional) How long to wait for the server to send 280s data before giving up, as a float, or a :ref:`(connect timeout, 280s read timeout) ` tuple. 280s :type timeout: float or tuple or urllib3 Timeout object 280s :param verify: (optional) Either a boolean, in which case it controls whether 280s we verify the server's TLS certificate, or a string, in which case it 280s must be a path to a CA bundle to use 280s :param cert: (optional) Any user-provided SSL certificate to be trusted. 280s :param proxies: (optional) The proxies dictionary to apply to the request. 280s :rtype: requests.Response 280s """ 280s 280s try: 280s conn = self.get_connection(request.url, proxies) 280s except LocationValueError as e: 280s raise InvalidURL(e, request=request) 280s 280s self.cert_verify(conn, request.url, verify, cert) 280s url = self.request_url(request, proxies) 280s self.add_headers( 280s request, 280s stream=stream, 280s timeout=timeout, 280s verify=verify, 280s cert=cert, 280s proxies=proxies, 280s ) 280s 280s chunked = not (request.body is None or "Content-Length" in request.headers) 280s 280s if isinstance(timeout, tuple): 280s try: 280s connect, read = timeout 280s timeout = TimeoutSauce(connect=connect, read=read) 280s except ValueError: 280s raise ValueError( 280s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 280s f"or a single float to set both timeouts to the same value." 
280s ) 280s elif isinstance(timeout, TimeoutSauce): 280s pass 280s else: 280s timeout = TimeoutSauce(connect=timeout, read=timeout) 280s 280s try: 280s resp = conn.urlopen( 280s method=request.method, 280s url=url, 280s body=request.body, 280s headers=request.headers, 280s redirect=False, 280s assert_same_host=False, 280s preload_content=False, 280s decode_content=False, 280s retries=self.max_retries, 280s timeout=timeout, 280s chunked=chunked, 280s ) 280s 280s except (ProtocolError, OSError) as err: 280s raise ConnectionError(err, request=request) 280s 280s except MaxRetryError as e: 280s if isinstance(e.reason, ConnectTimeoutError): 280s # TODO: Remove this in 3.0.0: see #2811 280s if not isinstance(e.reason, NewConnectionError): 280s raise ConnectTimeout(e, request=request) 280s 280s if isinstance(e.reason, ResponseError): 280s raise RetryError(e, request=request) 280s 280s if isinstance(e.reason, _ProxyError): 280s raise ProxyError(e, request=request) 280s 280s if isinstance(e.reason, _SSLError): 280s # This branch is for urllib3 v1.22 and later. 
280s raise SSLError(e, request=request) 280s 280s > raise ConnectionError(e, request=request) 280s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 280s 280s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 280s 280s The above exception was the direct cause of the following exception: 280s 280s cls = 280s 280s @classmethod 280s def setup_class(cls): 280s cls.tmp_dir = TemporaryDirectory() 280s def tmp(*parts): 280s path = os.path.join(cls.tmp_dir.name, *parts) 280s try: 280s os.makedirs(path) 280s except OSError as e: 280s if e.errno != errno.EEXIST: 280s raise 280s return path 280s 280s cls.home_dir = tmp('home') 280s data_dir = cls.data_dir = tmp('data') 280s config_dir = cls.config_dir = tmp('config') 280s runtime_dir = cls.runtime_dir = tmp('runtime') 280s cls.notebook_dir = tmp('notebooks') 280s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 280s cls.env_patch.start() 280s # Patch systemwide & user-wide data & config directories, to isolate 280s # the tests from oddities of the local setup. But leave Python env 280s # locations alone, so data files for e.g. nbconvert are accessible. 280s # If this isolation isn't sufficient, you may need to run the tests in 280s # a virtualenv or conda env. 
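The isolation technique the comment above describes (patching environment variables and module-level path constants for the lifetime of the test class, then restoring them) is the stock `unittest.mock` start/stop pattern. A minimal sketch, with an illustrative variable name (`EXAMPLE_CONFIG_DIR` is not used by the notebook tests):

```python
import os
from unittest.mock import patch

# Patch an environment variable for a bounded scope, then restore it,
# mirroring the env_patch.start()/env_patch.stop() lifecycle in setup_class.
env_patch = patch.dict(os.environ, {"EXAMPLE_CONFIG_DIR": "/tmp/example"})
env_patch.start()
in_scope = os.environ["EXAMPLE_CONFIG_DIR"]   # visible while patched
env_patch.stop()
restored = "EXAMPLE_CONFIG_DIR" not in os.environ  # removed on restore
```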
280s cls.path_patch = patch.multiple( 280s jupyter_core.paths, 280s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 280s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 280s ) 280s cls.path_patch.start() 280s 280s config = cls.config or Config() 280s config.NotebookNotary.db_file = ':memory:' 280s 280s cls.token = hexlify(os.urandom(4)).decode('ascii') 280s 280s started = Event() 280s def start_thread(): 280s try: 280s bind_args = cls.get_bind_args() 280s app = cls.notebook = NotebookApp( 280s port_retries=0, 280s open_browser=False, 280s config_dir=cls.config_dir, 280s data_dir=cls.data_dir, 280s runtime_dir=cls.runtime_dir, 280s notebook_dir=cls.notebook_dir, 280s base_url=cls.url_prefix, 280s config=config, 280s allow_root=True, 280s token=cls.token, 280s **bind_args 280s ) 280s if "asyncio" in sys.modules: 280s app._init_asyncio_patch() 280s import asyncio 280s 280s asyncio.set_event_loop(asyncio.new_event_loop()) 280s # Patch the current loop in order to match production 280s # behavior 280s import nest_asyncio 280s 280s nest_asyncio.apply() 280s # don't register signal handler during tests 280s app.init_signal = lambda : None 280s # clear log handlers and propagate to root for nose to capture it 280s # needs to be redone after initialize, which reconfigures logging 280s app.log.propagate = True 280s app.log.handlers = [] 280s app.initialize(argv=cls.get_argv()) 280s app.log.propagate = True 280s app.log.handlers = [] 280s loop = IOLoop.current() 280s loop.add_callback(started.set) 280s app.start() 280s finally: 280s # set the event, so failure to start doesn't cause a hang 280s started.set() 280s app.session_manager.close() 280s cls.notebook_thread = Thread(target=start_thread) 280s cls.notebook_thread.daemon = True 280s cls.notebook_thread.start() 280s started.wait() 280s > cls.wait_until_alive() 280s 280s notebook/tests/launchnotebook.py:198: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s cls = 280s 280s 
@classmethod 280s def wait_until_alive(cls): 280s """Wait for the server to be alive""" 280s url = cls.base_url() + 'api/contents' 280s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 280s try: 280s cls.fetch_url(url) 280s except ModuleNotFoundError as error: 280s # Errors that should be immediately thrown back to caller 280s raise error 280s except Exception as e: 280s if not cls.notebook_thread.is_alive(): 280s > raise RuntimeError("The notebook server failed to start") from e 280s E RuntimeError: The notebook server failed to start 280s 280s notebook/tests/launchnotebook.py:59: RuntimeError 280s ________________ ERROR at setup of APITest.test_copy_400_hidden ________________ 280s 280s self = 280s 280s def _new_conn(self) -> socket.socket: 280s """Establish a socket connection and set nodelay settings on it. 280s 280s :return: New socket connection. 280s """ 280s try: 280s > sock = connection.create_connection( 280s (self._dns_host, self.port), 280s self.timeout, 280s source_address=self.source_address, 280s socket_options=self.socket_options, 280s ) 280s 280s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 280s raise err 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s address = ('localhost', 12341), timeout = None, source_address = None 280s socket_options = [(6, 1, 1)] 280s 280s def create_connection( 280s address: tuple[str, int], 280s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 280s source_address: tuple[str, int] | None = None, 280s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 280s ) -> socket.socket: 280s """Connect to *address* and return the socket object. 280s 280s Convenience function. Connect to *address* (a 2-tuple ``(host, 280s port)``) and return the socket object. 
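The ``[Errno 111] Connection refused`` chain that dominates this log can be reproduced with the stdlib directly: connecting to a loopback port that nothing listens on fails the same way `create_connection` does above. The sketch below reserves a free port by binding and closing a socket, so it assumes nothing about the local environment beyond a working loopback interface:

```python
import errno
import socket

# Reserve a loopback port, close it so nothing is listening there, then
# connect: the connect() fails just like the tracebacks in this log.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
_, port = probe.getsockname()
probe.close()

try:
    socket.create_connection(("127.0.0.1", port), timeout=1.0)
    refused = False
except OSError as exc:
    refused = exc.errno == errno.ECONNREFUSED
```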
Passing the optional 280s *timeout* parameter will set the timeout on the socket instance 280s before attempting to connect. If no *timeout* is supplied, the 280s global default timeout setting returned by :func:`socket.getdefaulttimeout` 280s is used. If *source_address* is set it must be a tuple of (host, port) 280s for the socket to bind as a source address before making the connection. 280s An host of '' or port 0 tells the OS to use the default. 280s """ 280s 280s host, port = address 280s if host.startswith("["): 280s host = host.strip("[]") 280s err = None 280s 280s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 280s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 280s # The original create_connection function always returns all records. 280s family = allowed_gai_family() 280s 280s try: 280s host.encode("idna") 280s except UnicodeError: 280s raise LocationParseError(f"'{host}', label empty or too long") from None 280s 280s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 280s af, socktype, proto, canonname, sa = res 280s sock = None 280s try: 280s sock = socket.socket(af, socktype, proto) 280s 280s # If provided, set socket level options before connecting. 
280s ) 280s elif isinstance(timeout, TimeoutSauce): 280s pass 280s else: 280s timeout = TimeoutSauce(connect=timeout, read=timeout) 280s 280s try: 280s > resp = conn.urlopen( 280s method=request.method, 280s url=url, 280s body=request.body, 280s headers=request.headers, 280s redirect=False, 280s assert_same_host=False, 280s preload_content=False, 280s decode_content=False, 280s retries=self.max_retries, 280s timeout=timeout, 280s chunked=chunked, 280s ) 280s 280s /usr/lib/python3/dist-packages/requests/adapters.py:486: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 280s retries = retries.increment( 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 280s method = 'GET', url = '/a%40b/api/contents', response = None 280s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 280s _pool = 280s _stacktrace = 280s 280s def increment( 280s self, 280s method: str | None = None, 280s url: str | None = None, 280s response: BaseHTTPResponse | None = None, 280s error: Exception | None = None, 280s _pool: ConnectionPool | None = None, 280s _stacktrace: TracebackType | None = None, 280s ) -> Retry: 280s """Return a new Retry object with incremented retry counters. 280s 280s :param response: A response object, or None, if the server did not 280s return a response. 280s :type response: :class:`~urllib3.response.BaseHTTPResponse` 280s :param Exception error: An error encountered during the request, or 280s None if the response was received successfully. 280s 280s :return: A new ``Retry`` object. 280s """ 280s if self.total is False and error: 280s # Disabled, indicate to re-raise the error. 
280s raise reraise(type(error), error, _stacktrace) 280s 280s total = self.total 280s if total is not None: 280s total -= 1 280s 280s connect = self.connect 280s read = self.read 280s redirect = self.redirect 280s status_count = self.status 280s other = self.other 280s cause = "unknown" 280s status = None 280s redirect_location = None 280s 280s if error and self._is_connection_error(error): 280s # Connect retry? 280s if connect is False: 280s raise reraise(type(error), error, _stacktrace) 280s elif connect is not None: 280s connect -= 1 280s 280s elif error and self._is_read_error(error): 280s # Read retry? 280s if read is False or method is None or not self._is_method_retryable(method): 280s raise reraise(type(error), error, _stacktrace) 280s elif read is not None: 280s read -= 1 280s 280s elif error: 280s # Other retry? 280s if other is not None: 280s other -= 1 280s 280s elif response and response.get_redirect_location(): 280s # Redirect retry? 280s if redirect is not None: 280s redirect -= 1 280s cause = "too many redirects" 280s response_redirect_location = response.get_redirect_location() 280s if response_redirect_location: 280s redirect_location = response_redirect_location 280s status = response.status 280s 280s else: 280s # Incrementing because of a server error like a 500 in 280s # status_forcelist and the given method is in the allowed_methods 280s cause = ResponseError.GENERIC_ERROR 280s if response and response.status: 280s if status_count is not None: 280s status_count -= 1 280s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 280s status = response.status 280s 280s history = self.history + ( 280s RequestHistory(method, url, error, status, redirect_location), 280s ) 280s 280s new_retry = self.new( 280s total=total, 280s connect=connect, 280s read=read, 280s redirect=redirect, 280s status=status_count, 280s other=other, 280s history=history, 280s ) 280s 280s if new_retry.is_exhausted(): 280s reason = error or 
ResponseError(cause) 280s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 280s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 280s 280s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 280s 280s During handling of the above exception, another exception occurred: 280s 280s cls = 280s 280s @classmethod 280s def wait_until_alive(cls): 280s """Wait for the server to be alive""" 280s url = cls.base_url() + 'api/contents' 280s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 280s try: 280s > cls.fetch_url(url) 280s 280s notebook/tests/launchnotebook.py:53: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s notebook/tests/launchnotebook.py:82: in fetch_url 280s return requests.get(url) 280s /usr/lib/python3/dist-packages/requests/api.py:73: in get 280s return request("get", url, params=params, **kwargs) 280s /usr/lib/python3/dist-packages/requests/api.py:59: in request 280s return session.request(method=method, url=url, **kwargs) 280s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 280s resp = self.send(prep, **send_kwargs) 280s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 280s r = adapter.send(request, **kwargs) 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s self = 280s request = , stream = False 280s timeout = Timeout(connect=None, read=None, total=None), verify = True 280s cert = None, proxies = OrderedDict() 280s 280s def send( 280s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 280s ): 280s """Sends PreparedRequest object. Returns Response object. 280s 280s :param request: The :class:`PreparedRequest ` being sent. 
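The exception chain recorded above — `ConnectionRefusedError` → `NewConnectionError` → `MaxRetryError` → `requests.exceptions.ConnectionError` — can be reproduced outside the test suite. A minimal sketch, assuming `requests` is installed and using a freshly released ephemeral port so that nothing is listening on it (the port number and URL path are illustrative, not the testbed's):

```python
import socket
import requests

# Pick a port with (almost certainly) no listener: bind to an ephemeral
# port, note its number, then close the listener again.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
port = probe.getsockname()[1]
probe.close()

try:
    # One attempt, like the test's fetch_url(): the refused TCP connect
    # surfaces from urllib3 as NewConnectionError -> MaxRetryError and is
    # re-wrapped by the requests adapter into ConnectionError.
    requests.get(f"http://127.0.0.1:{port}/api/contents", timeout=2)
    caught = None
except requests.exceptions.ConnectionError as e:
    caught = type(e).__name__

print(caught)  # ConnectionError
```

This mirrors why `wait_until_alive` keeps polling: each poll fails with exactly this `ConnectionError` until the notebook server binds its port — or, as here, until the server thread dies and the poll loop raises `RuntimeError` instead.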
280s :param stream: (optional) Whether to stream the request content. 280s :param timeout: (optional) How long to wait for the server to send 280s data before giving up, as a float, or a :ref:`(connect timeout, 280s read timeout) ` tuple. 280s :type timeout: float or tuple or urllib3 Timeout object 280s :param verify: (optional) Either a boolean, in which case it controls whether 280s we verify the server's TLS certificate, or a string, in which case it 280s must be a path to a CA bundle to use 280s :param cert: (optional) Any user-provided SSL certificate to be trusted. 280s :param proxies: (optional) The proxies dictionary to apply to the request. 280s :rtype: requests.Response 280s """ 280s 280s try: 280s conn = self.get_connection(request.url, proxies) 280s except LocationValueError as e: 280s raise InvalidURL(e, request=request) 280s 280s self.cert_verify(conn, request.url, verify, cert) 280s url = self.request_url(request, proxies) 280s self.add_headers( 280s request, 280s stream=stream, 280s timeout=timeout, 280s verify=verify, 280s cert=cert, 280s proxies=proxies, 280s ) 280s 280s chunked = not (request.body is None or "Content-Length" in request.headers) 280s 280s if isinstance(timeout, tuple): 280s try: 280s connect, read = timeout 280s timeout = TimeoutSauce(connect=connect, read=read) 280s except ValueError: 280s raise ValueError( 280s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 280s f"or a single float to set both timeouts to the same value." 
280s ) 280s elif isinstance(timeout, TimeoutSauce): 280s pass 280s else: 280s timeout = TimeoutSauce(connect=timeout, read=timeout) 280s 280s try: 280s resp = conn.urlopen( 280s method=request.method, 280s url=url, 280s body=request.body, 280s headers=request.headers, 280s redirect=False, 280s assert_same_host=False, 280s preload_content=False, 280s decode_content=False, 280s retries=self.max_retries, 280s timeout=timeout, 280s chunked=chunked, 280s ) 280s 280s except (ProtocolError, OSError) as err: 280s raise ConnectionError(err, request=request) 280s 280s except MaxRetryError as e: 280s if isinstance(e.reason, ConnectTimeoutError): 280s # TODO: Remove this in 3.0.0: see #2811 280s if not isinstance(e.reason, NewConnectionError): 280s raise ConnectTimeout(e, request=request) 280s 280s if isinstance(e.reason, ResponseError): 280s raise RetryError(e, request=request) 280s 280s if isinstance(e.reason, _ProxyError): 280s raise ProxyError(e, request=request) 280s 280s if isinstance(e.reason, _SSLError): 280s # This branch is for urllib3 v1.22 and later. 
280s raise SSLError(e, request=request) 280s 280s > raise ConnectionError(e, request=request) 280s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 280s 280s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 280s 280s The above exception was the direct cause of the following exception: 280s 280s cls = 280s 280s @classmethod 280s def setup_class(cls): 280s cls.tmp_dir = TemporaryDirectory() 280s def tmp(*parts): 280s path = os.path.join(cls.tmp_dir.name, *parts) 280s try: 280s os.makedirs(path) 280s except OSError as e: 280s if e.errno != errno.EEXIST: 280s raise 280s return path 280s 280s cls.home_dir = tmp('home') 280s data_dir = cls.data_dir = tmp('data') 280s config_dir = cls.config_dir = tmp('config') 280s runtime_dir = cls.runtime_dir = tmp('runtime') 280s cls.notebook_dir = tmp('notebooks') 280s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 280s cls.env_patch.start() 280s # Patch systemwide & user-wide data & config directories, to isolate 280s # the tests from oddities of the local setup. But leave Python env 280s # locations alone, so data files for e.g. nbconvert are accessible. 280s # If this isolation isn't sufficient, you may need to run the tests in 280s # a virtualenv or conda env. 
280s cls.path_patch = patch.multiple( 280s jupyter_core.paths, 280s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 280s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 280s ) 280s cls.path_patch.start() 280s 280s config = cls.config or Config() 280s config.NotebookNotary.db_file = ':memory:' 280s 280s cls.token = hexlify(os.urandom(4)).decode('ascii') 280s 280s started = Event() 280s def start_thread(): 280s try: 280s bind_args = cls.get_bind_args() 280s app = cls.notebook = NotebookApp( 280s port_retries=0, 280s open_browser=False, 280s config_dir=cls.config_dir, 280s data_dir=cls.data_dir, 280s runtime_dir=cls.runtime_dir, 280s notebook_dir=cls.notebook_dir, 280s base_url=cls.url_prefix, 280s config=config, 280s allow_root=True, 280s token=cls.token, 280s **bind_args 280s ) 280s if "asyncio" in sys.modules: 280s app._init_asyncio_patch() 280s import asyncio 280s 280s asyncio.set_event_loop(asyncio.new_event_loop()) 280s # Patch the current loop in order to match production 280s # behavior 280s import nest_asyncio 280s 280s nest_asyncio.apply() 280s # don't register signal handler during tests 280s app.init_signal = lambda : None 280s # clear log handlers and propagate to root for nose to capture it 280s # needs to be redone after initialize, which reconfigures logging 280s app.log.propagate = True 280s app.log.handlers = [] 280s app.initialize(argv=cls.get_argv()) 280s app.log.propagate = True 280s app.log.handlers = [] 280s loop = IOLoop.current() 280s loop.add_callback(started.set) 280s app.start() 280s finally: 280s # set the event, so failure to start doesn't cause a hang 280s started.set() 280s app.session_manager.close() 280s cls.notebook_thread = Thread(target=start_thread) 280s cls.notebook_thread.daemon = True 280s cls.notebook_thread.start() 280s started.wait() 280s > cls.wait_until_alive() 280s 280s notebook/tests/launchnotebook.py:198: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s cls = 280s 280s 
280s     @classmethod
280s     def wait_until_alive(cls):
280s         """Wait for the server to be alive"""
280s         url = cls.base_url() + 'api/contents'
280s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
280s             try:
280s                 cls.fetch_url(url)
280s             except ModuleNotFoundError as error:
280s                 # Errors that should be immediately thrown back to caller
280s                 raise error
280s             except Exception as e:
280s                 if not cls.notebook_thread.is_alive():
280s >                   raise RuntimeError("The notebook server failed to start") from e
280s E                   RuntimeError: The notebook server failed to start
280s 
280s notebook/tests/launchnotebook.py:59: RuntimeError
280s ___________________ ERROR at setup of APITest.test_copy_copy ___________________
280s 
280s self = 
280s 
280s     def _new_conn(self) -> socket.socket:
280s         """Establish a socket connection and set nodelay settings on it.
280s 
280s         :return: New socket connection.
280s         """
280s         try:
280s >           sock = connection.create_connection(
280s                 (self._dns_host, self.port),
280s                 self.timeout,
280s                 source_address=self.source_address,
280s                 socket_options=self.socket_options,
280s             )
280s 
280s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
280s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
280s     raise err
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
280s 
280s address = ('localhost', 12341), timeout = None, source_address = None
280s socket_options = [(6, 1, 1)]
280s 
280s     def create_connection(
280s         address: tuple[str, int],
280s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
280s         source_address: tuple[str, int] | None = None,
280s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
280s     ) -> socket.socket:
280s         """Connect to *address* and return the socket object.
280s 
280s         Convenience function. Connect to *address* (a 2-tuple ``(host,
280s         port)``) and return the socket object.
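The bottom of the traceback is `create_connection()` failing with `[Errno 111] Connection refused`. That failure can be sketched with the standard library alone, assuming a Unix-like host: reserve an ephemeral port, close the listener, then connect to the now-dead port (the same situation as the notebook server's port after the server thread failed to start).

```python
import errno
import socket

# Reserve an ephemeral port, then close the listener so a follow-up
# connect to that port is refused by the OS.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
port = listener.getsockname()[1]
listener.close()

sock = socket.socket()
try:
    sock.connect(("127.0.0.1", port))  # no listener -> ECONNREFUSED
    err = None
except ConnectionRefusedError as e:
    err = e.errno
finally:
    sock.close()

print(err == errno.ECONNREFUSED)  # True; ECONNREFUSED is errno 111 on Linux
```

urllib3's `create_connection` does essentially this connect per `getaddrinfo` result and re-raises the last `OSError`, which `_new_conn` then wraps in `NewConnectionError` as shown above.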
Passing the optional 280s *timeout* parameter will set the timeout on the socket instance 280s before attempting to connect. If no *timeout* is supplied, the 280s global default timeout setting returned by :func:`socket.getdefaulttimeout` 280s is used. If *source_address* is set it must be a tuple of (host, port) 280s for the socket to bind as a source address before making the connection. 280s An host of '' or port 0 tells the OS to use the default. 280s """ 280s 280s host, port = address 280s if host.startswith("["): 280s host = host.strip("[]") 280s err = None 280s 280s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 280s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 280s # The original create_connection function always returns all records. 280s family = allowed_gai_family() 280s 280s try: 280s host.encode("idna") 280s except UnicodeError: 280s raise LocationParseError(f"'{host}', label empty or too long") from None 280s 280s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 280s af, socktype, proto, canonname, sa = res 280s sock = None 280s try: 280s sock = socket.socket(af, socktype, proto) 280s 280s # If provided, set socket level options before connecting. 
280s _set_socket_options(sock, socket_options) 280s 280s if timeout is not _DEFAULT_TIMEOUT: 280s sock.settimeout(timeout) 280s if source_address: 280s sock.bind(source_address) 280s > sock.connect(sa) 280s E ConnectionRefusedError: [Errno 111] Connection refused 280s 280s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 280s 280s The above exception was the direct cause of the following exception: 280s 280s self = 280s method = 'GET', url = '/a%40b/api/contents', body = None 280s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 280s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 280s redirect = False, assert_same_host = False 280s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 280s release_conn = False, chunked = False, body_pos = None, preload_content = False 280s decode_content = False, response_kw = {} 280s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 280s destination_scheme = None, conn = None, release_this_conn = True 280s http_tunnel_required = False, err = None, clean_exit = False 280s 280s def urlopen( # type: ignore[override] 280s self, 280s method: str, 280s url: str, 280s body: _TYPE_BODY | None = None, 280s headers: typing.Mapping[str, str] | None = None, 280s retries: Retry | bool | int | None = None, 280s redirect: bool = True, 280s assert_same_host: bool = True, 280s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 280s pool_timeout: int | None = None, 280s release_conn: bool | None = None, 280s chunked: bool = False, 280s body_pos: _TYPE_BODY_POSITION | None = None, 280s preload_content: bool = True, 280s decode_content: bool = True, 280s **response_kw: typing.Any, 280s ) -> BaseHTTPResponse: 280s """ 280s Get a connection from the pool and perform an HTTP request. 
This is the 280s lowest level call for making a request, so you'll need to specify all 280s the raw details. 280s 280s .. note:: 280s 280s More commonly, it's appropriate to use a convenience method 280s such as :meth:`request`. 280s 280s .. note:: 280s 280s `release_conn` will only behave as expected if 280s `preload_content=False` because we want to make 280s `preload_content=False` the default behaviour someday soon without 280s breaking backwards compatibility. 280s 280s :param method: 280s HTTP request method (such as GET, POST, PUT, etc.) 280s 280s :param url: 280s The URL to perform the request on. 280s 280s :param body: 280s Data to send in the request body, either :class:`str`, :class:`bytes`, 280s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 280s 280s :param headers: 280s Dictionary of custom headers to send, such as User-Agent, 280s If-None-Match, etc. If None, pool headers are used. If provided, 280s these headers completely replace any pool-specific headers. 280s 280s :param retries: 280s Configure the number of retries to allow before raising a 280s :class:`~urllib3.exceptions.MaxRetryError` exception. 280s 280s Pass ``None`` to retry until you receive a response. Pass a 280s :class:`~urllib3.util.retry.Retry` object for fine-grained control 280s over different types of retries. 280s Pass an integer number to retry connection errors that many times, 280s but no other types of errors. Pass zero to never retry. 280s 280s If ``False``, then retries are disabled and any exception is raised 280s immediately. Also, instead of raising a MaxRetryError on redirects, 280s the redirect response will be returned. 280s 280s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 280s 280s :param redirect: 280s If True, automatically handle redirects (status codes 301, 302, 280s 303, 307, 308). Each redirect counts as a retry. Disabling retries 280s will disable redirect, too. 
280s 280s :param assert_same_host: 280s If ``True``, will make sure that the host of the pool requests is 280s consistent else will raise HostChangedError. When ``False``, you can 280s use the pool on an HTTP proxy and request foreign hosts. 280s 280s :param timeout: 280s If specified, overrides the default timeout for this one 280s request. It may be a float (in seconds) or an instance of 280s :class:`urllib3.util.Timeout`. 280s 280s :param pool_timeout: 280s If set and the pool is set to block=True, then this method will 280s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 280s connection is available within the time period. 280s 280s :param bool preload_content: 280s If True, the response's body will be preloaded into memory. 280s 280s :param bool decode_content: 280s If True, will attempt to decode the body based on the 280s 'content-encoding' header. 280s 280s :param release_conn: 280s If False, then the urlopen call will not release the connection 280s back into the pool once a response is received (but will release if 280s you read the entire contents of the response such as when 280s `preload_content=True`). This is useful if you're not preloading 280s the response's content immediately. You will need to call 280s ``r.release_conn()`` on the response ``r`` to return the connection 280s back into the pool. If None, it takes the value of ``preload_content`` 280s which defaults to ``True``. 280s 280s :param bool chunked: 280s If True, urllib3 will send the body using chunked transfer 280s encoding. Otherwise, urllib3 will send the body using the standard 280s content-length form. Defaults to False. 280s 280s :param int body_pos: 280s Position to seek to in file-like body in the event of a retry or 280s redirect. Typically this won't need to be set because urllib3 will 280s auto-populate the value when needed. 
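The `retries` behaviour documented above is what turns a single refused connect into `MaxRetryError` in this log: the requests adapter hands `urlopen` a `Retry(total=0, connect=None, read=False, ...)` policy, so the very first failure exhausts it. A sketch of that bookkeeping, assuming urllib3 v2 as on the testbed, with a plain `OSError` standing in for the real `NewConnectionError`:

```python
from urllib3.exceptions import MaxRetryError
from urllib3.util.retry import Retry

# The policy shown in the traceback: a single attempt, no retries.
retry = Retry(total=0, connect=None, read=False, redirect=None, status=None)

try:
    # connectionpool.urlopen calls increment() with the caught error;
    # total drops from 0 to -1, the new Retry is exhausted, and
    # MaxRetryError is raised from the original cause.
    retry.increment(
        method="GET",
        url="/a%40b/api/contents",
        error=OSError("[Errno 111] Connection refused"),
    )
    caught = None
except MaxRetryError as e:
    caught = type(e).__name__

print(caught)  # MaxRetryError
```

With `total=None` and e.g. `connect=3`, the same call would instead return a new `Retry` with a decremented counter, and `urlopen` would sleep and recurse; `total=0` is why the log shows exactly one connect attempt per poll.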
280s """ 280s parsed_url = parse_url(url) 280s destination_scheme = parsed_url.scheme 280s 280s if headers is None: 280s headers = self.headers 280s 280s if not isinstance(retries, Retry): 280s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 280s 280s if release_conn is None: 280s release_conn = preload_content 280s 280s # Check host 280s if assert_same_host and not self.is_same_host(url): 280s raise HostChangedError(self, url, retries) 280s 280s # Ensure that the URL we're connecting to is properly encoded 280s if url.startswith("/"): 280s url = to_str(_encode_target(url)) 280s else: 280s url = to_str(parsed_url.url) 280s 280s conn = None 280s 280s # Track whether `conn` needs to be released before 280s # returning/raising/recursing. Update this variable if necessary, and 280s # leave `release_conn` constant throughout the function. That way, if 280s # the function recurses, the original value of `release_conn` will be 280s # passed down into the recursive call, and its value will be respected. 280s # 280s # See issue #651 [1] for details. 280s # 280s # [1] 280s release_this_conn = release_conn 280s 280s http_tunnel_required = connection_requires_http_tunnel( 280s self.proxy, self.proxy_config, destination_scheme 280s ) 280s 280s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 280s # have to copy the headers dict so we can safely change it without those 280s # changes being reflected in anyone else's copy. 280s if not http_tunnel_required: 280s headers = headers.copy() # type: ignore[attr-defined] 280s headers.update(self.proxy_headers) # type: ignore[union-attr] 280s 280s # Must keep the exception bound to a separate variable or else Python 3 280s # complains about UnboundLocalError. 280s err = None 280s 280s # Keep track of whether we cleanly exited the except block. This 280s # ensures we do proper cleanup in finally. 280s clean_exit = False 280s 280s # Rewind body position, if needed. 
Record current position 280s # for future rewinds in the event of a redirect/retry. 280s body_pos = set_file_position(body, body_pos) 280s 280s try: 280s # Request a connection from the queue. 280s timeout_obj = self._get_timeout(timeout) 280s conn = self._get_conn(timeout=pool_timeout) 280s 280s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 280s 280s # Is this a closed/new connection that requires CONNECT tunnelling? 280s if self.proxy is not None and http_tunnel_required and conn.is_closed: 280s try: 280s self._prepare_proxy(conn) 280s except (BaseSSLError, OSError, SocketTimeout) as e: 280s self._raise_timeout( 280s err=e, url=self.proxy.url, timeout_value=conn.timeout 280s ) 280s raise 280s 280s # If we're going to release the connection in ``finally:``, then 280s # the response doesn't need to know about the connection. Otherwise 280s # it will also try to release it and we'll have a double-release 280s # mess. 280s response_conn = conn if not release_conn else None 280s 280s # Make the request on the HTTPConnection object 280s > response = self._make_request( 280s conn, 280s method, 280s url, 280s timeout=timeout_obj, 280s body=body, 280s headers=headers, 280s chunked=chunked, 280s retries=retries, 280s response_conn=response_conn, 280s preload_content=preload_content, 280s decode_content=decode_content, 280s **response_kw, 280s ) 280s 280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 280s conn.request( 280s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 280s self.endheaders() 280s /usr/lib/python3.12/http/client.py:1331: in endheaders 280s self._send_output(message_body, encode_chunked=encode_chunked) 280s /usr/lib/python3.12/http/client.py:1091: in _send_output 280s self.send(msg) 280s /usr/lib/python3.12/http/client.py:1035: in 
send 280s self.connect() 280s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 280s self.sock = self._new_conn() 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s self = 280s 280s def _new_conn(self) -> socket.socket: 280s """Establish a socket connection and set nodelay settings on it. 280s 280s :return: New socket connection. 280s """ 280s try: 280s sock = connection.create_connection( 280s (self._dns_host, self.port), 280s self.timeout, 280s source_address=self.source_address, 280s socket_options=self.socket_options, 280s ) 280s except socket.gaierror as e: 280s raise NameResolutionError(self.host, self, e) from e 280s except SocketTimeout as e: 280s raise ConnectTimeoutError( 280s self, 280s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 280s ) from e 280s 280s except OSError as e: 280s > raise NewConnectionError( 280s self, f"Failed to establish a new connection: {e}" 280s ) from e 280s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 280s 280s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 280s 280s The above exception was the direct cause of the following exception: 280s 280s self = 280s request = , stream = False 280s timeout = Timeout(connect=None, read=None, total=None), verify = True 280s cert = None, proxies = OrderedDict() 280s 280s def send( 280s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 280s ): 280s """Sends PreparedRequest object. Returns Response object. 280s 280s :param request: The :class:`PreparedRequest ` being sent. 280s :param stream: (optional) Whether to stream the request content. 280s :param timeout: (optional) How long to wait for the server to send 280s data before giving up, as a float, or a :ref:`(connect timeout, 280s read timeout) ` tuple. 
280s :type timeout: float or tuple or urllib3 Timeout object 280s :param verify: (optional) Either a boolean, in which case it controls whether 280s we verify the server's TLS certificate, or a string, in which case it 280s must be a path to a CA bundle to use 280s :param cert: (optional) Any user-provided SSL certificate to be trusted. 280s :param proxies: (optional) The proxies dictionary to apply to the request. 280s :rtype: requests.Response 280s """ 280s 280s try: 280s conn = self.get_connection(request.url, proxies) 280s except LocationValueError as e: 280s raise InvalidURL(e, request=request) 280s 280s self.cert_verify(conn, request.url, verify, cert) 280s url = self.request_url(request, proxies) 280s self.add_headers( 280s request, 280s stream=stream, 280s timeout=timeout, 280s verify=verify, 280s cert=cert, 280s proxies=proxies, 280s ) 280s 280s chunked = not (request.body is None or "Content-Length" in request.headers) 280s 280s if isinstance(timeout, tuple): 280s try: 280s connect, read = timeout 280s timeout = TimeoutSauce(connect=connect, read=read) 280s except ValueError: 280s raise ValueError( 280s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 280s f"or a single float to set both timeouts to the same value." 
280s ) 280s elif isinstance(timeout, TimeoutSauce): 280s pass 280s else: 280s timeout = TimeoutSauce(connect=timeout, read=timeout) 280s 280s try: 280s > resp = conn.urlopen( 280s method=request.method, 280s url=url, 280s body=request.body, 280s headers=request.headers, 280s redirect=False, 280s assert_same_host=False, 280s preload_content=False, 280s decode_content=False, 280s retries=self.max_retries, 280s timeout=timeout, 280s chunked=chunked, 280s ) 280s 280s /usr/lib/python3/dist-packages/requests/adapters.py:486: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 280s retries = retries.increment( 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 280s method = 'GET', url = '/a%40b/api/contents', response = None 280s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 280s _pool = 280s _stacktrace = 280s 280s def increment( 280s self, 280s method: str | None = None, 280s url: str | None = None, 280s response: BaseHTTPResponse | None = None, 280s error: Exception | None = None, 280s _pool: ConnectionPool | None = None, 280s _stacktrace: TracebackType | None = None, 280s ) -> Retry: 280s """Return a new Retry object with incremented retry counters. 280s 280s :param response: A response object, or None, if the server did not 280s return a response. 280s :type response: :class:`~urllib3.response.BaseHTTPResponse` 280s :param Exception error: An error encountered during the request, or 280s None if the response was received successfully. 280s 280s :return: A new ``Retry`` object. 280s """ 280s if self.total is False and error: 280s # Disabled, indicate to re-raise the error. 
280s raise reraise(type(error), error, _stacktrace) 280s 280s total = self.total 280s if total is not None: 280s total -= 1 280s 280s connect = self.connect 280s read = self.read 280s redirect = self.redirect 280s status_count = self.status 280s other = self.other 280s cause = "unknown" 280s status = None 280s redirect_location = None 280s 280s if error and self._is_connection_error(error): 280s # Connect retry? 280s if connect is False: 280s raise reraise(type(error), error, _stacktrace) 280s elif connect is not None: 280s connect -= 1 280s 280s elif error and self._is_read_error(error): 280s # Read retry? 280s if read is False or method is None or not self._is_method_retryable(method): 280s raise reraise(type(error), error, _stacktrace) 280s elif read is not None: 280s read -= 1 280s 280s elif error: 280s # Other retry? 280s if other is not None: 280s other -= 1 280s 280s elif response and response.get_redirect_location(): 280s # Redirect retry? 280s if redirect is not None: 280s redirect -= 1 280s cause = "too many redirects" 280s response_redirect_location = response.get_redirect_location() 280s if response_redirect_location: 280s redirect_location = response_redirect_location 280s status = response.status 280s 280s else: 280s # Incrementing because of a server error like a 500 in 280s # status_forcelist and the given method is in the allowed_methods 280s cause = ResponseError.GENERIC_ERROR 280s if response and response.status: 280s if status_count is not None: 280s status_count -= 1 280s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 280s status = response.status 280s 280s history = self.history + ( 280s RequestHistory(method, url, error, status, redirect_location), 280s ) 280s 280s new_retry = self.new( 280s total=total, 280s connect=connect, 280s read=read, 280s redirect=redirect, 280s status=status_count, 280s other=other, 280s history=history, 280s ) 280s 280s if new_retry.is_exhausted(): 280s reason = error or ResponseError(cause)
280s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 280s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 280s 280s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 280s 280s During handling of the above exception, another exception occurred: 280s 280s cls = 280s 280s @classmethod 280s def wait_until_alive(cls): 280s """Wait for the server to be alive""" 280s url = cls.base_url() + 'api/contents' 280s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 280s try: 280s > cls.fetch_url(url) 280s 280s notebook/tests/launchnotebook.py:53: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s notebook/tests/launchnotebook.py:82: in fetch_url 280s return requests.get(url) 280s /usr/lib/python3/dist-packages/requests/api.py:73: in get 280s return request("get", url, params=params, **kwargs) 280s /usr/lib/python3/dist-packages/requests/api.py:59: in request 280s return session.request(method=method, url=url, **kwargs) 280s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 280s resp = self.send(prep, **send_kwargs) 280s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 280s r = adapter.send(request, **kwargs) 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s self = 280s request = , stream = False 280s timeout = Timeout(connect=None, read=None, total=None), verify = True 280s cert = None, proxies = OrderedDict() 280s 280s def send( 280s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 280s ): 280s """Sends PreparedRequest object. Returns Response object. 280s 280s :param request: The :class:`PreparedRequest ` being sent.
280s :param stream: (optional) Whether to stream the request content. 280s :param timeout: (optional) How long to wait for the server to send 280s data before giving up, as a float, or a :ref:`(connect timeout, 280s read timeout) ` tuple. 280s :type timeout: float or tuple or urllib3 Timeout object 280s :param verify: (optional) Either a boolean, in which case it controls whether 280s we verify the server's TLS certificate, or a string, in which case it 280s must be a path to a CA bundle to use 280s :param cert: (optional) Any user-provided SSL certificate to be trusted. 280s :param proxies: (optional) The proxies dictionary to apply to the request. 280s :rtype: requests.Response 280s """ 280s 280s try: 280s conn = self.get_connection(request.url, proxies) 280s except LocationValueError as e: 280s raise InvalidURL(e, request=request) 280s 280s self.cert_verify(conn, request.url, verify, cert) 280s url = self.request_url(request, proxies) 280s self.add_headers( 280s request, 280s stream=stream, 280s timeout=timeout, 280s verify=verify, 280s cert=cert, 280s proxies=proxies, 280s ) 280s 280s chunked = not (request.body is None or "Content-Length" in request.headers) 280s 280s if isinstance(timeout, tuple): 280s try: 280s connect, read = timeout 280s timeout = TimeoutSauce(connect=connect, read=read) 280s except ValueError: 280s raise ValueError( 280s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 280s f"or a single float to set both timeouts to the same value." 
280s ) 280s elif isinstance(timeout, TimeoutSauce): 280s pass 280s else: 280s timeout = TimeoutSauce(connect=timeout, read=timeout) 280s 280s try: 280s resp = conn.urlopen( 280s method=request.method, 280s url=url, 280s body=request.body, 280s headers=request.headers, 280s redirect=False, 280s assert_same_host=False, 280s preload_content=False, 280s decode_content=False, 280s retries=self.max_retries, 280s timeout=timeout, 280s chunked=chunked, 280s ) 280s 280s except (ProtocolError, OSError) as err: 280s raise ConnectionError(err, request=request) 280s 280s except MaxRetryError as e: 280s if isinstance(e.reason, ConnectTimeoutError): 280s # TODO: Remove this in 3.0.0: see #2811 280s if not isinstance(e.reason, NewConnectionError): 280s raise ConnectTimeout(e, request=request) 280s 280s if isinstance(e.reason, ResponseError): 280s raise RetryError(e, request=request) 280s 280s if isinstance(e.reason, _ProxyError): 280s raise ProxyError(e, request=request) 280s 280s if isinstance(e.reason, _SSLError): 280s # This branch is for urllib3 v1.22 and later. 
280s raise SSLError(e, request=request) 280s 280s > raise ConnectionError(e, request=request) 280s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 280s 280s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 280s 280s The above exception was the direct cause of the following exception: 280s 280s cls = 280s 280s @classmethod 280s def setup_class(cls): 280s cls.tmp_dir = TemporaryDirectory() 280s def tmp(*parts): 280s path = os.path.join(cls.tmp_dir.name, *parts) 280s try: 280s os.makedirs(path) 280s except OSError as e: 280s if e.errno != errno.EEXIST: 280s raise 280s return path 280s 280s cls.home_dir = tmp('home') 280s data_dir = cls.data_dir = tmp('data') 280s config_dir = cls.config_dir = tmp('config') 280s runtime_dir = cls.runtime_dir = tmp('runtime') 280s cls.notebook_dir = tmp('notebooks') 280s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 280s cls.env_patch.start() 280s # Patch systemwide & user-wide data & config directories, to isolate 280s # the tests from oddities of the local setup. But leave Python env 280s # locations alone, so data files for e.g. nbconvert are accessible. 280s # If this isolation isn't sufficient, you may need to run the tests in 280s # a virtualenv or conda env. 
280s cls.path_patch = patch.multiple( 280s jupyter_core.paths, 280s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 280s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 280s ) 280s cls.path_patch.start() 280s 280s config = cls.config or Config() 280s config.NotebookNotary.db_file = ':memory:' 280s 280s cls.token = hexlify(os.urandom(4)).decode('ascii') 280s 280s started = Event() 280s def start_thread(): 280s try: 280s bind_args = cls.get_bind_args() 280s app = cls.notebook = NotebookApp( 280s port_retries=0, 280s open_browser=False, 280s config_dir=cls.config_dir, 280s data_dir=cls.data_dir, 280s runtime_dir=cls.runtime_dir, 280s notebook_dir=cls.notebook_dir, 280s base_url=cls.url_prefix, 280s config=config, 280s allow_root=True, 280s token=cls.token, 280s **bind_args 280s ) 280s if "asyncio" in sys.modules: 280s app._init_asyncio_patch() 280s import asyncio 280s 280s asyncio.set_event_loop(asyncio.new_event_loop()) 280s # Patch the current loop in order to match production 280s # behavior 280s import nest_asyncio 280s 280s nest_asyncio.apply() 280s # don't register signal handler during tests 280s app.init_signal = lambda : None 280s # clear log handlers and propagate to root for nose to capture it 280s # needs to be redone after initialize, which reconfigures logging 280s app.log.propagate = True 280s app.log.handlers = [] 280s app.initialize(argv=cls.get_argv()) 280s app.log.propagate = True 280s app.log.handlers = [] 280s loop = IOLoop.current() 280s loop.add_callback(started.set) 280s app.start() 280s finally: 280s # set the event, so failure to start doesn't cause a hang 280s started.set() 280s app.session_manager.close() 280s cls.notebook_thread = Thread(target=start_thread) 280s cls.notebook_thread.daemon = True 280s cls.notebook_thread.start() 280s started.wait() 280s > cls.wait_until_alive() 280s 280s notebook/tests/launchnotebook.py:198: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s cls = 280s 280s 
@classmethod 280s def wait_until_alive(cls): 280s """Wait for the server to be alive""" 280s url = cls.base_url() + 'api/contents' 280s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 280s try: 280s cls.fetch_url(url) 280s except ModuleNotFoundError as error: 280s # Errors that should be immediately thrown back to caller 280s raise error 280s except Exception as e: 280s if not cls.notebook_thread.is_alive(): 280s > raise RuntimeError("The notebook server failed to start") from e 280s E RuntimeError: The notebook server failed to start 280s 280s notebook/tests/launchnotebook.py:59: RuntimeError 280s _________________ ERROR at setup of APITest.test_copy_dir_400 __________________ 280s 280s self = 280s 280s def _new_conn(self) -> socket.socket: 280s """Establish a socket connection and set nodelay settings on it. 280s 280s :return: New socket connection. 280s """ 280s try: 280s > sock = connection.create_connection( 280s (self._dns_host, self.port), 280s self.timeout, 280s source_address=self.source_address, 280s socket_options=self.socket_options, 280s ) 280s 280s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 280s raise err 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s address = ('localhost', 12341), timeout = None, source_address = None 280s socket_options = [(6, 1, 1)] 280s 280s def create_connection( 280s address: tuple[str, int], 280s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 280s source_address: tuple[str, int] | None = None, 280s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 280s ) -> socket.socket: 280s """Connect to *address* and return the socket object. 280s 280s Convenience function. Connect to *address* (a 2-tuple ``(host, 280s port)``) and return the socket object. 
Passing the optional 280s *timeout* parameter will set the timeout on the socket instance 280s before attempting to connect. If no *timeout* is supplied, the 280s global default timeout setting returned by :func:`socket.getdefaulttimeout` 280s is used. If *source_address* is set it must be a tuple of (host, port) 280s for the socket to bind as a source address before making the connection. 280s An host of '' or port 0 tells the OS to use the default. 280s """ 280s 280s host, port = address 280s if host.startswith("["): 280s host = host.strip("[]") 280s err = None 280s 280s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 280s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 280s # The original create_connection function always returns all records. 280s family = allowed_gai_family() 280s 280s try: 280s host.encode("idna") 280s except UnicodeError: 280s raise LocationParseError(f"'{host}', label empty or too long") from None 280s 280s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 280s af, socktype, proto, canonname, sa = res 280s sock = None 280s try: 280s sock = socket.socket(af, socktype, proto) 280s 280s # If provided, set socket level options before connecting. 
280s _set_socket_options(sock, socket_options) 280s 280s if timeout is not _DEFAULT_TIMEOUT: 280s sock.settimeout(timeout) 280s if source_address: 280s sock.bind(source_address) 280s > sock.connect(sa) 280s E ConnectionRefusedError: [Errno 111] Connection refused 280s 280s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 280s 280s The above exception was the direct cause of the following exception: 280s 280s self = 280s method = 'GET', url = '/a%40b/api/contents', body = None 280s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 280s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 280s redirect = False, assert_same_host = False 280s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 280s release_conn = False, chunked = False, body_pos = None, preload_content = False 280s decode_content = False, response_kw = {} 280s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 280s destination_scheme = None, conn = None, release_this_conn = True 280s http_tunnel_required = False, err = None, clean_exit = False 280s 280s def urlopen( # type: ignore[override] 280s self, 280s method: str, 280s url: str, 280s body: _TYPE_BODY | None = None, 280s headers: typing.Mapping[str, str] | None = None, 280s retries: Retry | bool | int | None = None, 280s redirect: bool = True, 280s assert_same_host: bool = True, 280s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 280s pool_timeout: int | None = None, 280s release_conn: bool | None = None, 280s chunked: bool = False, 280s body_pos: _TYPE_BODY_POSITION | None = None, 280s preload_content: bool = True, 280s decode_content: bool = True, 280s **response_kw: typing.Any, 280s ) -> BaseHTTPResponse: 280s """ 280s Get a connection from the pool and perform an HTTP request. 
This is the 280s lowest level call for making a request, so you'll need to specify all 280s the raw details. 280s 280s .. note:: 280s 280s More commonly, it's appropriate to use a convenience method 280s such as :meth:`request`. 280s 280s .. note:: 280s 280s `release_conn` will only behave as expected if 280s `preload_content=False` because we want to make 280s `preload_content=False` the default behaviour someday soon without 280s breaking backwards compatibility. 280s 280s :param method: 280s HTTP request method (such as GET, POST, PUT, etc.) 280s 280s :param url: 280s The URL to perform the request on. 280s 280s :param body: 280s Data to send in the request body, either :class:`str`, :class:`bytes`, 280s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 280s 280s :param headers: 280s Dictionary of custom headers to send, such as User-Agent, 280s If-None-Match, etc. If None, pool headers are used. If provided, 280s these headers completely replace any pool-specific headers. 280s 280s :param retries: 280s Configure the number of retries to allow before raising a 280s :class:`~urllib3.exceptions.MaxRetryError` exception. 280s 280s Pass ``None`` to retry until you receive a response. Pass a 280s :class:`~urllib3.util.retry.Retry` object for fine-grained control 280s over different types of retries. 280s Pass an integer number to retry connection errors that many times, 280s but no other types of errors. Pass zero to never retry. 280s 280s If ``False``, then retries are disabled and any exception is raised 280s immediately. Also, instead of raising a MaxRetryError on redirects, 280s the redirect response will be returned. 280s 280s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 280s 280s :param redirect: 280s If True, automatically handle redirects (status codes 301, 302, 280s 303, 307, 308). Each redirect counts as a retry. Disabling retries 280s will disable redirect, too. 
280s 280s :param assert_same_host: 280s If ``True``, will make sure that the host of the pool requests is 280s consistent else will raise HostChangedError. When ``False``, you can 280s use the pool on an HTTP proxy and request foreign hosts. 280s 280s :param timeout: 280s If specified, overrides the default timeout for this one 280s request. It may be a float (in seconds) or an instance of 280s :class:`urllib3.util.Timeout`. 280s 280s :param pool_timeout: 280s If set and the pool is set to block=True, then this method will 280s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 280s connection is available within the time period. 280s 280s :param bool preload_content: 280s If True, the response's body will be preloaded into memory. 280s 280s :param bool decode_content: 280s If True, will attempt to decode the body based on the 280s 'content-encoding' header. 280s 280s :param release_conn: 280s If False, then the urlopen call will not release the connection 280s back into the pool once a response is received (but will release if 280s you read the entire contents of the response such as when 280s `preload_content=True`). This is useful if you're not preloading 280s the response's content immediately. You will need to call 280s ``r.release_conn()`` on the response ``r`` to return the connection 280s back into the pool. If None, it takes the value of ``preload_content`` 280s which defaults to ``True``. 280s 280s :param bool chunked: 280s If True, urllib3 will send the body using chunked transfer 280s encoding. Otherwise, urllib3 will send the body using the standard 280s content-length form. Defaults to False. 280s 280s :param int body_pos: 280s Position to seek to in file-like body in the event of a retry or 280s redirect. Typically this won't need to be set because urllib3 will 280s auto-populate the value when needed. 
280s """ 280s parsed_url = parse_url(url) 280s destination_scheme = parsed_url.scheme 280s 280s if headers is None: 280s headers = self.headers 280s 280s if not isinstance(retries, Retry): 280s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 280s 280s if release_conn is None: 280s release_conn = preload_content 280s 280s # Check host 280s if assert_same_host and not self.is_same_host(url): 280s raise HostChangedError(self, url, retries) 280s 280s # Ensure that the URL we're connecting to is properly encoded 280s if url.startswith("/"): 280s url = to_str(_encode_target(url)) 280s else: 280s url = to_str(parsed_url.url) 280s 280s conn = None 280s 280s # Track whether `conn` needs to be released before 280s # returning/raising/recursing. Update this variable if necessary, and 280s # leave `release_conn` constant throughout the function. That way, if 280s # the function recurses, the original value of `release_conn` will be 280s # passed down into the recursive call, and its value will be respected. 280s # 280s # See issue #651 [1] for details. 280s # 280s # [1] 280s release_this_conn = release_conn 280s 280s http_tunnel_required = connection_requires_http_tunnel( 280s self.proxy, self.proxy_config, destination_scheme 280s ) 280s 280s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 280s # have to copy the headers dict so we can safely change it without those 280s # changes being reflected in anyone else's copy. 280s if not http_tunnel_required: 280s headers = headers.copy() # type: ignore[attr-defined] 280s headers.update(self.proxy_headers) # type: ignore[union-attr] 280s 280s # Must keep the exception bound to a separate variable or else Python 3 280s # complains about UnboundLocalError. 280s err = None 280s 280s # Keep track of whether we cleanly exited the except block. This 280s # ensures we do proper cleanup in finally. 280s clean_exit = False 280s 280s # Rewind body position, if needed. 
Record current position 280s # for future rewinds in the event of a redirect/retry. 280s body_pos = set_file_position(body, body_pos) 280s 280s try: 280s # Request a connection from the queue. 280s timeout_obj = self._get_timeout(timeout) 280s conn = self._get_conn(timeout=pool_timeout) 280s 280s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 280s 280s # Is this a closed/new connection that requires CONNECT tunnelling? 280s if self.proxy is not None and http_tunnel_required and conn.is_closed: 280s try: 280s self._prepare_proxy(conn) 280s except (BaseSSLError, OSError, SocketTimeout) as e: 280s self._raise_timeout( 280s err=e, url=self.proxy.url, timeout_value=conn.timeout 280s ) 280s raise 280s 280s # If we're going to release the connection in ``finally:``, then 280s # the response doesn't need to know about the connection. Otherwise 280s # it will also try to release it and we'll have a double-release 280s # mess. 280s response_conn = conn if not release_conn else None 280s 280s # Make the request on the HTTPConnection object 280s > response = self._make_request( 280s conn, 280s method, 280s url, 280s timeout=timeout_obj, 280s body=body, 280s headers=headers, 280s chunked=chunked, 280s retries=retries, 280s response_conn=response_conn, 280s preload_content=preload_content, 280s decode_content=decode_content, 280s **response_kw, 280s ) 280s 280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 280s conn.request( 280s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 280s self.endheaders() 280s /usr/lib/python3.12/http/client.py:1331: in endheaders 280s self._send_output(message_body, encode_chunked=encode_chunked) 280s /usr/lib/python3.12/http/client.py:1091: in _send_output 280s self.send(msg) 280s /usr/lib/python3.12/http/client.py:1035: in send
280s self.connect() 280s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 280s self.sock = self._new_conn() 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s self = 280s 280s def _new_conn(self) -> socket.socket: 280s """Establish a socket connection and set nodelay settings on it. 280s 280s :return: New socket connection. 280s """ 280s try: 280s sock = connection.create_connection( 280s (self._dns_host, self.port), 280s self.timeout, 280s source_address=self.source_address, 280s socket_options=self.socket_options, 280s ) 280s except socket.gaierror as e: 280s raise NameResolutionError(self.host, self, e) from e 280s except SocketTimeout as e: 280s raise ConnectTimeoutError( 280s self, 280s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 280s ) from e 280s 280s except OSError as e: 280s > raise NewConnectionError( 280s self, f"Failed to establish a new connection: {e}" 280s ) from e 280s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 280s 280s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 280s 280s The above exception was the direct cause of the following exception: 280s 280s self = 280s request = , stream = False 280s timeout = Timeout(connect=None, read=None, total=None), verify = True 280s cert = None, proxies = OrderedDict() 280s 280s def send( 280s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 280s ): 280s """Sends PreparedRequest object. Returns Response object. 280s 280s :param request: The :class:`PreparedRequest ` being sent. 280s :param stream: (optional) Whether to stream the request content. 280s :param timeout: (optional) How long to wait for the server to send 280s data before giving up, as a float, or a :ref:`(connect timeout, 280s read timeout) ` tuple.
280s :type timeout: float or tuple or urllib3 Timeout object 280s :param verify: (optional) Either a boolean, in which case it controls whether 280s we verify the server's TLS certificate, or a string, in which case it 280s must be a path to a CA bundle to use 280s :param cert: (optional) Any user-provided SSL certificate to be trusted. 280s :param proxies: (optional) The proxies dictionary to apply to the request. 280s :rtype: requests.Response 280s """ 280s 280s try: 280s conn = self.get_connection(request.url, proxies) 280s except LocationValueError as e: 280s raise InvalidURL(e, request=request) 280s 280s self.cert_verify(conn, request.url, verify, cert) 280s url = self.request_url(request, proxies) 280s self.add_headers( 280s request, 280s stream=stream, 280s timeout=timeout, 280s verify=verify, 280s cert=cert, 280s proxies=proxies, 280s ) 280s 280s chunked = not (request.body is None or "Content-Length" in request.headers) 280s 280s if isinstance(timeout, tuple): 280s try: 280s connect, read = timeout 280s timeout = TimeoutSauce(connect=connect, read=read) 280s except ValueError: 280s raise ValueError( 280s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 280s f"or a single float to set both timeouts to the same value." 
280s ) 280s elif isinstance(timeout, TimeoutSauce): 280s pass 280s else: 280s timeout = TimeoutSauce(connect=timeout, read=timeout) 280s 280s try: 280s > resp = conn.urlopen( 280s method=request.method, 280s url=url, 280s body=request.body, 280s headers=request.headers, 280s redirect=False, 280s assert_same_host=False, 280s preload_content=False, 280s decode_content=False, 280s retries=self.max_retries, 280s timeout=timeout, 280s chunked=chunked, 280s ) 280s 280s /usr/lib/python3/dist-packages/requests/adapters.py:486: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 280s retries = retries.increment( 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 280s method = 'GET', url = '/a%40b/api/contents', response = None 280s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 280s _pool = 280s _stacktrace = 280s 280s def increment( 280s self, 280s method: str | None = None, 280s url: str | None = None, 280s response: BaseHTTPResponse | None = None, 280s error: Exception | None = None, 280s _pool: ConnectionPool | None = None, 280s _stacktrace: TracebackType | None = None, 280s ) -> Retry: 280s """Return a new Retry object with incremented retry counters. 280s 280s :param response: A response object, or None, if the server did not 280s return a response. 280s :type response: :class:`~urllib3.response.BaseHTTPResponse` 280s :param Exception error: An error encountered during the request, or 280s None if the response was received successfully. 280s 280s :return: A new ``Retry`` object. 280s """ 280s if self.total is False and error: 280s # Disabled, indicate to re-raise the error. 
280s raise reraise(type(error), error, _stacktrace) 280s 280s total = self.total 280s if total is not None: 280s total -= 1 280s 280s connect = self.connect 280s read = self.read 280s redirect = self.redirect 280s status_count = self.status 280s other = self.other 280s cause = "unknown" 280s status = None 280s redirect_location = None 280s 280s if error and self._is_connection_error(error): 280s # Connect retry? 280s if connect is False: 280s raise reraise(type(error), error, _stacktrace) 280s elif connect is not None: 280s connect -= 1 280s 280s elif error and self._is_read_error(error): 280s # Read retry? 280s if read is False or method is None or not self._is_method_retryable(method): 280s raise reraise(type(error), error, _stacktrace) 280s elif read is not None: 280s read -= 1 280s 280s elif error: 280s # Other retry? 280s if other is not None: 280s other -= 1 280s 280s elif response and response.get_redirect_location(): 280s # Redirect retry? 280s if redirect is not None: 280s redirect -= 1 280s cause = "too many redirects" 280s response_redirect_location = response.get_redirect_location() 280s if response_redirect_location: 280s redirect_location = response_redirect_location 280s status = response.status 280s 280s else: 280s # Incrementing because of a server error like a 500 in 280s # status_forcelist and the given method is in the allowed_methods 280s cause = ResponseError.GENERIC_ERROR 280s if response and response.status: 280s if status_count is not None: 280s status_count -= 1 280s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 280s status = response.status 280s 280s history = self.history + ( 280s RequestHistory(method, url, error, status, redirect_location), 280s ) 280s 280s new_retry = self.new( 280s total=total, 280s connect=connect, 280s read=read, 280s redirect=redirect, 280s status=status_count, 280s other=other, 280s history=history, 280s ) 280s 280s if new_retry.is_exhausted(): 280s reason = error or ResponseError(cause)
280s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 280s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 280s 280s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 280s 280s During handling of the above exception, another exception occurred: 280s 280s cls = 280s 280s @classmethod 280s def wait_until_alive(cls): 280s """Wait for the server to be alive""" 280s url = cls.base_url() + 'api/contents' 280s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 280s try: 280s > cls.fetch_url(url) 280s 280s notebook/tests/launchnotebook.py:53: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s notebook/tests/launchnotebook.py:82: in fetch_url 280s return requests.get(url) 280s /usr/lib/python3/dist-packages/requests/api.py:73: in get 280s return request("get", url, params=params, **kwargs) 280s /usr/lib/python3/dist-packages/requests/api.py:59: in request 280s return session.request(method=method, url=url, **kwargs) 280s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 280s resp = self.send(prep, **send_kwargs) 280s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 280s r = adapter.send(request, **kwargs) 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s self = 280s request = , stream = False 280s timeout = Timeout(connect=None, read=None, total=None), verify = True 280s cert = None, proxies = OrderedDict() 280s 280s def send( 280s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 280s ): 280s """Sends PreparedRequest object. Returns Response object. 280s 280s :param request: The :class:`PreparedRequest ` being sent.
280s :param stream: (optional) Whether to stream the request content. 280s :param timeout: (optional) How long to wait for the server to send 280s data before giving up, as a float, or a :ref:`(connect timeout, 280s read timeout) ` tuple. 280s :type timeout: float or tuple or urllib3 Timeout object 280s :param verify: (optional) Either a boolean, in which case it controls whether 280s we verify the server's TLS certificate, or a string, in which case it 280s must be a path to a CA bundle to use 280s :param cert: (optional) Any user-provided SSL certificate to be trusted. 280s :param proxies: (optional) The proxies dictionary to apply to the request. 280s :rtype: requests.Response 280s """ 280s 280s try: 280s conn = self.get_connection(request.url, proxies) 280s except LocationValueError as e: 280s raise InvalidURL(e, request=request) 280s 280s self.cert_verify(conn, request.url, verify, cert) 280s url = self.request_url(request, proxies) 280s self.add_headers( 280s request, 280s stream=stream, 280s timeout=timeout, 280s verify=verify, 280s cert=cert, 280s proxies=proxies, 280s ) 280s 280s chunked = not (request.body is None or "Content-Length" in request.headers) 280s 280s if isinstance(timeout, tuple): 280s try: 280s connect, read = timeout 280s timeout = TimeoutSauce(connect=connect, read=read) 280s except ValueError: 280s raise ValueError( 280s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 280s f"or a single float to set both timeouts to the same value." 
280s ) 280s elif isinstance(timeout, TimeoutSauce): 280s pass 280s else: 280s timeout = TimeoutSauce(connect=timeout, read=timeout) 280s 280s try: 280s resp = conn.urlopen( 280s method=request.method, 280s url=url, 280s body=request.body, 280s headers=request.headers, 280s redirect=False, 280s assert_same_host=False, 280s preload_content=False, 280s decode_content=False, 280s retries=self.max_retries, 280s timeout=timeout, 280s chunked=chunked, 280s ) 280s 280s except (ProtocolError, OSError) as err: 280s raise ConnectionError(err, request=request) 280s 280s except MaxRetryError as e: 280s if isinstance(e.reason, ConnectTimeoutError): 280s # TODO: Remove this in 3.0.0: see #2811 280s if not isinstance(e.reason, NewConnectionError): 280s raise ConnectTimeout(e, request=request) 280s 280s if isinstance(e.reason, ResponseError): 280s raise RetryError(e, request=request) 280s 280s if isinstance(e.reason, _ProxyError): 280s raise ProxyError(e, request=request) 280s 280s if isinstance(e.reason, _SSLError): 280s # This branch is for urllib3 v1.22 and later. 
280s raise SSLError(e, request=request) 280s 280s > raise ConnectionError(e, request=request) 280s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 280s 280s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 280s 280s The above exception was the direct cause of the following exception: 280s 280s cls = 280s 280s @classmethod 280s def setup_class(cls): 280s cls.tmp_dir = TemporaryDirectory() 280s def tmp(*parts): 280s path = os.path.join(cls.tmp_dir.name, *parts) 280s try: 280s os.makedirs(path) 280s except OSError as e: 280s if e.errno != errno.EEXIST: 280s raise 280s return path 280s 280s cls.home_dir = tmp('home') 280s data_dir = cls.data_dir = tmp('data') 280s config_dir = cls.config_dir = tmp('config') 280s runtime_dir = cls.runtime_dir = tmp('runtime') 280s cls.notebook_dir = tmp('notebooks') 280s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 280s cls.env_patch.start() 280s # Patch systemwide & user-wide data & config directories, to isolate 280s # the tests from oddities of the local setup. But leave Python env 280s # locations alone, so data files for e.g. nbconvert are accessible. 280s # If this isolation isn't sufficient, you may need to run the tests in 280s # a virtualenv or conda env. 
280s cls.path_patch = patch.multiple( 280s jupyter_core.paths, 280s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 280s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 280s ) 280s cls.path_patch.start() 280s 280s config = cls.config or Config() 280s config.NotebookNotary.db_file = ':memory:' 280s 280s cls.token = hexlify(os.urandom(4)).decode('ascii') 280s 280s started = Event() 280s def start_thread(): 280s try: 280s bind_args = cls.get_bind_args() 280s app = cls.notebook = NotebookApp( 280s port_retries=0, 280s open_browser=False, 280s config_dir=cls.config_dir, 280s data_dir=cls.data_dir, 280s runtime_dir=cls.runtime_dir, 280s notebook_dir=cls.notebook_dir, 280s base_url=cls.url_prefix, 280s config=config, 280s allow_root=True, 280s token=cls.token, 280s **bind_args 280s ) 280s if "asyncio" in sys.modules: 280s app._init_asyncio_patch() 280s import asyncio 280s 280s asyncio.set_event_loop(asyncio.new_event_loop()) 280s # Patch the current loop in order to match production 280s # behavior 280s import nest_asyncio 280s 280s nest_asyncio.apply() 280s # don't register signal handler during tests 280s app.init_signal = lambda : None 280s # clear log handlers and propagate to root for nose to capture it 280s # needs to be redone after initialize, which reconfigures logging 280s app.log.propagate = True 280s app.log.handlers = [] 280s app.initialize(argv=cls.get_argv()) 280s app.log.propagate = True 280s app.log.handlers = [] 280s loop = IOLoop.current() 280s loop.add_callback(started.set) 280s app.start() 280s finally: 280s # set the event, so failure to start doesn't cause a hang 280s started.set() 280s app.session_manager.close() 280s cls.notebook_thread = Thread(target=start_thread) 280s cls.notebook_thread.daemon = True 280s cls.notebook_thread.start() 280s started.wait() 280s > cls.wait_until_alive() 280s 280s notebook/tests/launchnotebook.py:198: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s cls = 280s 280s 
@classmethod 280s def wait_until_alive(cls): 280s """Wait for the server to be alive""" 280s url = cls.base_url() + 'api/contents' 280s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 280s try: 280s cls.fetch_url(url) 280s except ModuleNotFoundError as error: 280s # Errors that should be immediately thrown back to caller 280s raise error 280s except Exception as e: 280s if not cls.notebook_thread.is_alive(): 280s > raise RuntimeError("The notebook server failed to start") from e 280s E RuntimeError: The notebook server failed to start 280s 280s notebook/tests/launchnotebook.py:59: RuntimeError 280s ___________________ ERROR at setup of APITest.test_copy_path ___________________ 280s 280s self = 280s 280s def _new_conn(self) -> socket.socket: 280s """Establish a socket connection and set nodelay settings on it. 280s 280s :return: New socket connection. 280s """ 280s try: 280s > sock = connection.create_connection( 280s (self._dns_host, self.port), 280s self.timeout, 280s source_address=self.source_address, 280s socket_options=self.socket_options, 280s ) 280s 280s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 280s raise err 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s address = ('localhost', 12341), timeout = None, source_address = None 280s socket_options = [(6, 1, 1)] 280s 280s def create_connection( 280s address: tuple[str, int], 280s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 280s source_address: tuple[str, int] | None = None, 280s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 280s ) -> socket.socket: 280s """Connect to *address* and return the socket object. 280s 280s Convenience function. Connect to *address* (a 2-tuple ``(host, 280s port)``) and return the socket object. 
Passing the optional 280s *timeout* parameter will set the timeout on the socket instance 280s before attempting to connect. If no *timeout* is supplied, the 280s global default timeout setting returned by :func:`socket.getdefaulttimeout` 280s is used. If *source_address* is set it must be a tuple of (host, port) 280s for the socket to bind as a source address before making the connection. 280s An host of '' or port 0 tells the OS to use the default. 280s """ 280s 280s host, port = address 280s if host.startswith("["): 280s host = host.strip("[]") 280s err = None 280s 280s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 280s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 280s # The original create_connection function always returns all records. 280s family = allowed_gai_family() 280s 280s try: 280s host.encode("idna") 280s except UnicodeError: 280s raise LocationParseError(f"'{host}', label empty or too long") from None 280s 280s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 280s af, socktype, proto, canonname, sa = res 280s sock = None 280s try: 280s sock = socket.socket(af, socktype, proto) 280s 280s # If provided, set socket level options before connecting. 
280s _set_socket_options(sock, socket_options) 280s 280s if timeout is not _DEFAULT_TIMEOUT: 280s sock.settimeout(timeout) 280s if source_address: 280s sock.bind(source_address) 280s > sock.connect(sa) 280s E ConnectionRefusedError: [Errno 111] Connection refused 280s 280s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 280s 280s The above exception was the direct cause of the following exception: 280s 280s self = 280s method = 'GET', url = '/a%40b/api/contents', body = None 280s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 280s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 280s redirect = False, assert_same_host = False 280s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 280s release_conn = False, chunked = False, body_pos = None, preload_content = False 280s decode_content = False, response_kw = {} 280s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 280s destination_scheme = None, conn = None, release_this_conn = True 280s http_tunnel_required = False, err = None, clean_exit = False 280s 280s def urlopen( # type: ignore[override] 280s self, 280s method: str, 280s url: str, 280s body: _TYPE_BODY | None = None, 280s headers: typing.Mapping[str, str] | None = None, 280s retries: Retry | bool | int | None = None, 280s redirect: bool = True, 280s assert_same_host: bool = True, 280s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 280s pool_timeout: int | None = None, 280s release_conn: bool | None = None, 280s chunked: bool = False, 280s body_pos: _TYPE_BODY_POSITION | None = None, 280s preload_content: bool = True, 280s decode_content: bool = True, 280s **response_kw: typing.Any, 280s ) -> BaseHTTPResponse: 280s """ 280s Get a connection from the pool and perform an HTTP request. 
This is the 280s lowest level call for making a request, so you'll need to specify all 280s the raw details. 280s 280s .. note:: 280s 280s More commonly, it's appropriate to use a convenience method 280s such as :meth:`request`. 280s 280s .. note:: 280s 280s `release_conn` will only behave as expected if 280s `preload_content=False` because we want to make 280s `preload_content=False` the default behaviour someday soon without 280s breaking backwards compatibility. 280s 280s :param method: 280s HTTP request method (such as GET, POST, PUT, etc.) 280s 280s :param url: 280s The URL to perform the request on. 280s 280s :param body: 280s Data to send in the request body, either :class:`str`, :class:`bytes`, 280s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 280s 280s :param headers: 280s Dictionary of custom headers to send, such as User-Agent, 280s If-None-Match, etc. If None, pool headers are used. If provided, 280s these headers completely replace any pool-specific headers. 280s 280s :param retries: 280s Configure the number of retries to allow before raising a 280s :class:`~urllib3.exceptions.MaxRetryError` exception. 280s 280s Pass ``None`` to retry until you receive a response. Pass a 280s :class:`~urllib3.util.retry.Retry` object for fine-grained control 280s over different types of retries. 280s Pass an integer number to retry connection errors that many times, 280s but no other types of errors. Pass zero to never retry. 280s 280s If ``False``, then retries are disabled and any exception is raised 280s immediately. Also, instead of raising a MaxRetryError on redirects, 280s the redirect response will be returned. 280s 280s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 280s 280s :param redirect: 280s If True, automatically handle redirects (status codes 301, 302, 280s 303, 307, 308). Each redirect counts as a retry. Disabling retries 280s will disable redirect, too. 
280s 280s :param assert_same_host: 280s If ``True``, will make sure that the host of the pool requests is 280s consistent else will raise HostChangedError. When ``False``, you can 280s use the pool on an HTTP proxy and request foreign hosts. 280s 280s :param timeout: 280s If specified, overrides the default timeout for this one 280s request. It may be a float (in seconds) or an instance of 280s :class:`urllib3.util.Timeout`. 280s 280s :param pool_timeout: 280s If set and the pool is set to block=True, then this method will 280s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 280s connection is available within the time period. 280s 280s :param bool preload_content: 280s If True, the response's body will be preloaded into memory. 280s 280s :param bool decode_content: 280s If True, will attempt to decode the body based on the 280s 'content-encoding' header. 280s 280s :param release_conn: 280s If False, then the urlopen call will not release the connection 280s back into the pool once a response is received (but will release if 280s you read the entire contents of the response such as when 280s `preload_content=True`). This is useful if you're not preloading 280s the response's content immediately. You will need to call 280s ``r.release_conn()`` on the response ``r`` to return the connection 280s back into the pool. If None, it takes the value of ``preload_content`` 280s which defaults to ``True``. 280s 280s :param bool chunked: 280s If True, urllib3 will send the body using chunked transfer 280s encoding. Otherwise, urllib3 will send the body using the standard 280s content-length form. Defaults to False. 280s 280s :param int body_pos: 280s Position to seek to in file-like body in the event of a retry or 280s redirect. Typically this won't need to be set because urllib3 will 280s auto-populate the value when needed. 
280s """ 280s parsed_url = parse_url(url) 280s destination_scheme = parsed_url.scheme 280s 280s if headers is None: 280s headers = self.headers 280s 280s if not isinstance(retries, Retry): 280s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 280s 280s if release_conn is None: 280s release_conn = preload_content 280s 280s # Check host 280s if assert_same_host and not self.is_same_host(url): 280s raise HostChangedError(self, url, retries) 280s 280s # Ensure that the URL we're connecting to is properly encoded 280s if url.startswith("/"): 280s url = to_str(_encode_target(url)) 280s else: 280s url = to_str(parsed_url.url) 280s 280s conn = None 280s 280s # Track whether `conn` needs to be released before 280s # returning/raising/recursing. Update this variable if necessary, and 280s # leave `release_conn` constant throughout the function. That way, if 280s # the function recurses, the original value of `release_conn` will be 280s # passed down into the recursive call, and its value will be respected. 280s # 280s # See issue #651 [1] for details. 280s # 280s # [1] 280s release_this_conn = release_conn 280s 280s http_tunnel_required = connection_requires_http_tunnel( 280s self.proxy, self.proxy_config, destination_scheme 280s ) 280s 280s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 280s # have to copy the headers dict so we can safely change it without those 280s # changes being reflected in anyone else's copy. 280s if not http_tunnel_required: 280s headers = headers.copy() # type: ignore[attr-defined] 280s headers.update(self.proxy_headers) # type: ignore[union-attr] 280s 280s # Must keep the exception bound to a separate variable or else Python 3 280s # complains about UnboundLocalError. 280s err = None 280s 280s # Keep track of whether we cleanly exited the except block. This 280s # ensures we do proper cleanup in finally. 280s clean_exit = False 280s 280s # Rewind body position, if needed. 
Record current position 280s # for future rewinds in the event of a redirect/retry. 280s body_pos = set_file_position(body, body_pos) 280s 280s try: 280s # Request a connection from the queue. 280s timeout_obj = self._get_timeout(timeout) 280s conn = self._get_conn(timeout=pool_timeout) 280s 280s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 280s 280s # Is this a closed/new connection that requires CONNECT tunnelling? 280s if self.proxy is not None and http_tunnel_required and conn.is_closed: 280s try: 280s self._prepare_proxy(conn) 280s except (BaseSSLError, OSError, SocketTimeout) as e: 280s self._raise_timeout( 280s err=e, url=self.proxy.url, timeout_value=conn.timeout 280s ) 280s raise 280s 280s # If we're going to release the connection in ``finally:``, then 280s # the response doesn't need to know about the connection. Otherwise 280s # it will also try to release it and we'll have a double-release 280s # mess. 280s response_conn = conn if not release_conn else None 280s 280s # Make the request on the HTTPConnection object 280s > response = self._make_request( 280s conn, 280s method, 280s url, 280s timeout=timeout_obj, 280s body=body, 280s headers=headers, 280s chunked=chunked, 280s retries=retries, 280s response_conn=response_conn, 280s preload_content=preload_content, 280s decode_content=decode_content, 280s **response_kw, 280s ) 280s 280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 280s conn.request( 280s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 280s self.endheaders() 280s /usr/lib/python3.12/http/client.py:1331: in endheaders 280s self._send_output(message_body, encode_chunked=encode_chunked) 280s /usr/lib/python3.12/http/client.py:1091: in _send_output 280s self.send(msg) 280s /usr/lib/python3.12/http/client.py:1035: in 
send 280s self.connect() 280s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 280s self.sock = self._new_conn() 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s self = 280s 280s def _new_conn(self) -> socket.socket: 280s """Establish a socket connection and set nodelay settings on it. 280s 280s :return: New socket connection. 280s """ 280s try: 280s sock = connection.create_connection( 280s (self._dns_host, self.port), 280s self.timeout, 280s source_address=self.source_address, 280s socket_options=self.socket_options, 280s ) 280s except socket.gaierror as e: 280s raise NameResolutionError(self.host, self, e) from e 280s except SocketTimeout as e: 280s raise ConnectTimeoutError( 280s self, 280s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 280s ) from e 280s 280s except OSError as e: 280s > raise NewConnectionError( 280s self, f"Failed to establish a new connection: {e}" 280s ) from e 280s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 280s 280s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 280s 280s The above exception was the direct cause of the following exception: 280s 280s self = 280s request = , stream = False 280s timeout = Timeout(connect=None, read=None, total=None), verify = True 280s cert = None, proxies = OrderedDict() 280s 280s def send( 280s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 280s ): 280s """Sends PreparedRequest object. Returns Response object. 280s 280s :param request: The :class:`PreparedRequest ` being sent. 280s :param stream: (optional) Whether to stream the request content. 280s :param timeout: (optional) How long to wait for the server to send 280s data before giving up, as a float, or a :ref:`(connect timeout, 280s read timeout) ` tuple. 
280s :type timeout: float or tuple or urllib3 Timeout object 280s :param verify: (optional) Either a boolean, in which case it controls whether 280s we verify the server's TLS certificate, or a string, in which case it 280s must be a path to a CA bundle to use 280s :param cert: (optional) Any user-provided SSL certificate to be trusted. 280s :param proxies: (optional) The proxies dictionary to apply to the request. 280s :rtype: requests.Response 280s """ 280s 280s try: 280s conn = self.get_connection(request.url, proxies) 280s except LocationValueError as e: 280s raise InvalidURL(e, request=request) 280s 280s self.cert_verify(conn, request.url, verify, cert) 280s url = self.request_url(request, proxies) 280s self.add_headers( 280s request, 280s stream=stream, 280s timeout=timeout, 280s verify=verify, 280s cert=cert, 280s proxies=proxies, 280s ) 280s 280s chunked = not (request.body is None or "Content-Length" in request.headers) 280s 280s if isinstance(timeout, tuple): 280s try: 280s connect, read = timeout 280s timeout = TimeoutSauce(connect=connect, read=read) 280s except ValueError: 280s raise ValueError( 280s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 280s f"or a single float to set both timeouts to the same value." 
280s ) 280s elif isinstance(timeout, TimeoutSauce): 280s pass 280s else: 280s timeout = TimeoutSauce(connect=timeout, read=timeout) 280s 280s try: 280s > resp = conn.urlopen( 280s method=request.method, 280s url=url, 280s body=request.body, 280s headers=request.headers, 280s redirect=False, 280s assert_same_host=False, 280s preload_content=False, 280s decode_content=False, 280s retries=self.max_retries, 280s timeout=timeout, 280s chunked=chunked, 280s ) 280s 280s /usr/lib/python3/dist-packages/requests/adapters.py:486: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 280s retries = retries.increment( 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 280s method = 'GET', url = '/a%40b/api/contents', response = None 280s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 280s _pool = 280s _stacktrace = 280s 280s def increment( 280s self, 280s method: str | None = None, 280s url: str | None = None, 280s response: BaseHTTPResponse | None = None, 280s error: Exception | None = None, 280s _pool: ConnectionPool | None = None, 280s _stacktrace: TracebackType | None = None, 280s ) -> Retry: 280s """Return a new Retry object with incremented retry counters. 280s 280s :param response: A response object, or None, if the server did not 280s return a response. 280s :type response: :class:`~urllib3.response.BaseHTTPResponse` 280s :param Exception error: An error encountered during the request, or 280s None if the response was received successfully. 280s 280s :return: A new ``Retry`` object. 280s """ 280s if self.total is False and error: 280s # Disabled, indicate to re-raise the error. 
280s raise reraise(type(error), error, _stacktrace) 280s 280s total = self.total 280s if total is not None: 280s total -= 1 280s 280s connect = self.connect 280s read = self.read 280s redirect = self.redirect 280s status_count = self.status 280s other = self.other 280s cause = "unknown" 280s status = None 280s redirect_location = None 280s 280s if error and self._is_connection_error(error): 280s # Connect retry? 280s if connect is False: 280s raise reraise(type(error), error, _stacktrace) 280s elif connect is not None: 280s connect -= 1 280s 280s elif error and self._is_read_error(error): 280s # Read retry? 280s if read is False or method is None or not self._is_method_retryable(method): 280s raise reraise(type(error), error, _stacktrace) 280s elif read is not None: 280s read -= 1 280s 280s elif error: 280s # Other retry? 280s if other is not None: 280s other -= 1 280s 280s elif response and response.get_redirect_location(): 280s # Redirect retry? 280s if redirect is not None: 280s redirect -= 1 280s cause = "too many redirects" 280s response_redirect_location = response.get_redirect_location() 280s if response_redirect_location: 280s redirect_location = response_redirect_location 280s status = response.status 280s 280s else: 280s # Incrementing because of a server error like a 500 in 280s # status_forcelist and the given method is in the allowed_methods 280s cause = ResponseError.GENERIC_ERROR 280s if response and response.status: 280s if status_count is not None: 280s status_count -= 1 280s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 280s status = response.status 280s 280s history = self.history + ( 280s RequestHistory(method, url, error, status, redirect_location), 280s ) 280s 280s new_retry = self.new( 280s total=total, 280s connect=connect, 280s read=read, 280s redirect=redirect, 280s status=status_count, 280s other=other, 280s history=history, 280s ) 280s 280s if new_retry.is_exhausted(): 280s reason = error or 
ResponseError(cause) 280s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 280s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 280s 280s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 280s 280s [... intermediate frames identical to the preceding failure: wait_until_alive -> fetch_url -> requests.adapters raises requests.exceptions.ConnectionError at adapters.py:519, which is the direct cause of the setup_class failure at launchnotebook.py:198 ...] 280s 280s cls = 280s 280s 
280s     @classmethod
280s     def wait_until_alive(cls):
280s         """Wait for the server to be alive"""
280s         url = cls.base_url() + 'api/contents'
280s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
280s             try:
280s                 cls.fetch_url(url)
280s             except ModuleNotFoundError as error:
280s                 # Errors that should be immediately thrown back to caller
280s                 raise error
280s             except Exception as e:
280s                 if not cls.notebook_thread.is_alive():
280s >                   raise RuntimeError("The notebook server failed to start") from e
280s E                   RuntimeError: The notebook server failed to start
280s 
280s notebook/tests/launchnotebook.py:59: RuntimeError
280s _________________ ERROR at setup of APITest.test_copy_put_400 __________________
280s 
280s self = 
280s 
280s     def _new_conn(self) -> socket.socket:
280s         """Establish a socket connection and set nodelay settings on it.
280s 
280s         :return: New socket connection.
280s         """
280s         try:
280s >           sock = connection.create_connection(
280s                 (self._dns_host, self.port),
280s                 self.timeout,
280s                 source_address=self.source_address,
280s                 socket_options=self.socket_options,
280s             )
280s 
280s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
280s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
280s     raise err
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
280s 
280s address = ('localhost', 12341), timeout = None, source_address = None
280s socket_options = [(6, 1, 1)]
280s 
280s     def create_connection(
280s         address: tuple[str, int],
280s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
280s         source_address: tuple[str, int] | None = None,
280s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
280s     ) -> socket.socket:
280s         """Connect to *address* and return the socket object.
280s 
280s         Convenience function.  Connect to *address* (a 2-tuple ``(host,
280s         port)``) and return the socket object.  Passing the optional
280s         *timeout* parameter will set the timeout on the socket instance
280s         before attempting to connect.  If no *timeout* is supplied, the
280s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
280s         is used.  If *source_address* is set it must be a tuple of (host, port)
280s         for the socket to bind as a source address before making the connection.
280s         An host of '' or port 0 tells the OS to use the default.
280s         """
280s 
280s         host, port = address
280s         if host.startswith("["):
280s             host = host.strip("[]")
280s         err = None
280s 
280s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
280s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
280s         # The original create_connection function always returns all records.
280s         family = allowed_gai_family()
280s 
280s         try:
280s             host.encode("idna")
280s         except UnicodeError:
280s             raise LocationParseError(f"'{host}', label empty or too long") from None
280s 
280s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
280s             af, socktype, proto, canonname, sa = res
280s             sock = None
280s             try:
280s                 sock = socket.socket(af, socktype, proto)
280s 
280s                 # If provided, set socket level options before connecting.
280s                 _set_socket_options(sock, socket_options)
280s 
280s                 if timeout is not _DEFAULT_TIMEOUT:
280s                     sock.settimeout(timeout)
280s                 if source_address:
280s                     sock.bind(source_address)
280s >               sock.connect(sa)
280s E               ConnectionRefusedError: [Errno 111] Connection refused
280s 
280s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
280s 
280s The above exception was the direct cause of the following exception:
280s 
280s self = 
280s method = 'GET', url = '/a%40b/api/contents', body = None
280s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
280s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
280s redirect = False, assert_same_host = False
280s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
280s release_conn = False, chunked = False, body_pos = None, preload_content = False
280s decode_content = False, response_kw = {}
280s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
280s destination_scheme = None, conn = None, release_this_conn = True
280s http_tunnel_required = False, err = None, clean_exit = False
280s 
280s     def urlopen(  # type: ignore[override]
280s         self,
280s         method: str,
280s         url: str,
280s         body: _TYPE_BODY | None = None,
280s         headers: typing.Mapping[str, str] | None = None,
280s         retries: Retry | bool | int | None = None,
280s         redirect: bool = True,
280s         assert_same_host: bool = True,
280s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
280s         pool_timeout: int | None = None,
280s         release_conn: bool | None = None,
280s         chunked: bool = False,
280s         body_pos: _TYPE_BODY_POSITION | None = None,
280s         preload_content: bool = True,
280s         decode_content: bool = True,
280s         **response_kw: typing.Any,
280s     ) -> BaseHTTPResponse:
280s         """
280s         Get a connection from the pool and perform an HTTP request. This is the
280s         lowest level call for making a request, so you'll need to specify all
280s         the raw details.
280s 
280s         .. note::
280s 
280s            More commonly, it's appropriate to use a convenience method
280s            such as :meth:`request`.
280s 
280s         .. note::
280s 
280s            `release_conn` will only behave as expected if
280s            `preload_content=False` because we want to make
280s            `preload_content=False` the default behaviour someday soon without
280s            breaking backwards compatibility.
280s 
280s         :param method:
280s             HTTP request method (such as GET, POST, PUT, etc.)
280s 
280s         :param url:
280s             The URL to perform the request on.
280s 
280s         :param body:
280s             Data to send in the request body, either :class:`str`, :class:`bytes`,
280s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
280s 
280s         :param headers:
280s             Dictionary of custom headers to send, such as User-Agent,
280s             If-None-Match, etc. If None, pool headers are used. If provided,
280s             these headers completely replace any pool-specific headers.
280s 
280s         :param retries:
280s             Configure the number of retries to allow before raising a
280s             :class:`~urllib3.exceptions.MaxRetryError` exception.
280s 
280s             Pass ``None`` to retry until you receive a response. Pass a
280s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
280s             over different types of retries.
280s             Pass an integer number to retry connection errors that many times,
280s             but no other types of errors. Pass zero to never retry.
280s 
280s             If ``False``, then retries are disabled and any exception is raised
280s             immediately. Also, instead of raising a MaxRetryError on redirects,
280s             the redirect response will be returned.
280s 
280s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
280s 
280s         :param redirect:
280s             If True, automatically handle redirects (status codes 301, 302,
280s             303, 307, 308). Each redirect counts as a retry. Disabling retries
280s             will disable redirect, too.
280s 
280s         :param assert_same_host:
280s             If ``True``, will make sure that the host of the pool requests is
280s             consistent else will raise HostChangedError. When ``False``, you can
280s             use the pool on an HTTP proxy and request foreign hosts.
280s 
280s         :param timeout:
280s             If specified, overrides the default timeout for this one
280s             request. It may be a float (in seconds) or an instance of
280s             :class:`urllib3.util.Timeout`.
280s 
280s         :param pool_timeout:
280s             If set and the pool is set to block=True, then this method will
280s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
280s             connection is available within the time period.
280s 
280s         :param bool preload_content:
280s             If True, the response's body will be preloaded into memory.
280s 
280s         :param bool decode_content:
280s             If True, will attempt to decode the body based on the
280s             'content-encoding' header.
280s 
280s         :param release_conn:
280s             If False, then the urlopen call will not release the connection
280s             back into the pool once a response is received (but will release if
280s             you read the entire contents of the response such as when
280s             `preload_content=True`). This is useful if you're not preloading
280s             the response's content immediately. You will need to call
280s             ``r.release_conn()`` on the response ``r`` to return the connection
280s             back into the pool. If None, it takes the value of ``preload_content``
280s             which defaults to ``True``.
280s 
280s         :param bool chunked:
280s             If True, urllib3 will send the body using chunked transfer
280s             encoding. Otherwise, urllib3 will send the body using the standard
280s             content-length form. Defaults to False.
280s 
280s         :param int body_pos:
280s             Position to seek to in file-like body in the event of a retry or
280s             redirect. Typically this won't need to be set because urllib3 will
280s             auto-populate the value when needed.
280s         """
280s         parsed_url = parse_url(url)
280s         destination_scheme = parsed_url.scheme
280s 
280s         if headers is None:
280s             headers = self.headers
280s 
280s         if not isinstance(retries, Retry):
280s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
280s 
280s         if release_conn is None:
280s             release_conn = preload_content
280s 
280s         # Check host
280s         if assert_same_host and not self.is_same_host(url):
280s             raise HostChangedError(self, url, retries)
280s 
280s         # Ensure that the URL we're connecting to is properly encoded
280s         if url.startswith("/"):
280s             url = to_str(_encode_target(url))
280s         else:
280s             url = to_str(parsed_url.url)
280s 
280s         conn = None
280s 
280s         # Track whether `conn` needs to be released before
280s         # returning/raising/recursing. Update this variable if necessary, and
280s         # leave `release_conn` constant throughout the function. That way, if
280s         # the function recurses, the original value of `release_conn` will be
280s         # passed down into the recursive call, and its value will be respected.
280s         #
280s         # See issue #651 [1] for details.
280s         #
280s         # [1] 
280s         release_this_conn = release_conn
280s 
280s         http_tunnel_required = connection_requires_http_tunnel(
280s             self.proxy, self.proxy_config, destination_scheme
280s         )
280s 
280s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
280s         # have to copy the headers dict so we can safely change it without those
280s         # changes being reflected in anyone else's copy.
280s         if not http_tunnel_required:
280s             headers = headers.copy()  # type: ignore[attr-defined]
280s             headers.update(self.proxy_headers)  # type: ignore[union-attr]
280s 
280s         # Must keep the exception bound to a separate variable or else Python 3
280s         # complains about UnboundLocalError.
280s         err = None
280s 
280s         # Keep track of whether we cleanly exited the except block. This
280s         # ensures we do proper cleanup in finally.
280s         clean_exit = False
280s 
280s         # Rewind body position, if needed. Record current position
280s         # for future rewinds in the event of a redirect/retry.
280s         body_pos = set_file_position(body, body_pos)
280s 
280s         try:
280s             # Request a connection from the queue.
280s             timeout_obj = self._get_timeout(timeout)
280s             conn = self._get_conn(timeout=pool_timeout)
280s 
280s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
280s 
280s             # Is this a closed/new connection that requires CONNECT tunnelling?
280s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
280s                 try:
280s                     self._prepare_proxy(conn)
280s                 except (BaseSSLError, OSError, SocketTimeout) as e:
280s                     self._raise_timeout(
280s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
280s                     )
280s                     raise
280s 
280s             # If we're going to release the connection in ``finally:``, then
280s             # the response doesn't need to know about the connection. Otherwise
280s             # it will also try to release it and we'll have a double-release
280s             # mess.
280s             response_conn = conn if not release_conn else None
280s 
280s             # Make the request on the HTTPConnection object
280s >           response = self._make_request(
280s                 conn,
280s                 method,
280s                 url,
280s                 timeout=timeout_obj,
280s                 body=body,
280s                 headers=headers,
280s                 chunked=chunked,
280s                 retries=retries,
280s                 response_conn=response_conn,
280s                 preload_content=preload_content,
280s                 decode_content=decode_content,
280s                 **response_kw,
280s             )
280s 
280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
280s     conn.request(
280s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
280s     self.endheaders()
280s /usr/lib/python3.12/http/client.py:1331: in endheaders
280s     self._send_output(message_body, encode_chunked=encode_chunked)
280s /usr/lib/python3.12/http/client.py:1091: in _send_output
280s     self.send(msg)
280s /usr/lib/python3.12/http/client.py:1035: in send
280s     self.connect()
280s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
280s     self.sock = self._new_conn()
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
280s 
280s self = 
280s 
280s     def _new_conn(self) -> socket.socket:
280s         """Establish a socket connection and set nodelay settings on it.
280s 
280s         :return: New socket connection.
280s         """
280s         try:
280s             sock = connection.create_connection(
280s                 (self._dns_host, self.port),
280s                 self.timeout,
280s                 source_address=self.source_address,
280s                 socket_options=self.socket_options,
280s             )
280s         except socket.gaierror as e:
280s             raise NameResolutionError(self.host, self, e) from e
280s         except SocketTimeout as e:
280s             raise ConnectTimeoutError(
280s                 self,
280s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
280s             ) from e
280s 
280s         except OSError as e:
280s >           raise NewConnectionError(
280s                 self, f"Failed to establish a new connection: {e}"
280s             ) from e
280s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
280s 
280s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
280s 
280s The above exception was the direct cause of the following exception:
280s 
280s self = 
280s request = , stream = False
280s timeout = Timeout(connect=None, read=None, total=None), verify = True
280s cert = None, proxies = OrderedDict()
280s 
280s     def send(
280s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
280s     ):
280s         """Sends PreparedRequest object. Returns Response object.
280s 
280s         :param request: The :class:`PreparedRequest ` being sent.
280s         :param stream: (optional) Whether to stream the request content.
280s         :param timeout: (optional) How long to wait for the server to send
280s             data before giving up, as a float, or a :ref:`(connect timeout,
280s             read timeout) ` tuple.
280s         :type timeout: float or tuple or urllib3 Timeout object
280s         :param verify: (optional) Either a boolean, in which case it controls whether
280s             we verify the server's TLS certificate, or a string, in which case it
280s             must be a path to a CA bundle to use
280s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
280s         :param proxies: (optional) The proxies dictionary to apply to the request.
280s         :rtype: requests.Response
280s         """
280s 
280s         try:
280s             conn = self.get_connection(request.url, proxies)
280s         except LocationValueError as e:
280s             raise InvalidURL(e, request=request)
280s 
280s         self.cert_verify(conn, request.url, verify, cert)
280s         url = self.request_url(request, proxies)
280s         self.add_headers(
280s             request,
280s             stream=stream,
280s             timeout=timeout,
280s             verify=verify,
280s             cert=cert,
280s             proxies=proxies,
280s         )
280s 
280s         chunked = not (request.body is None or "Content-Length" in request.headers)
280s 
280s         if isinstance(timeout, tuple):
280s             try:
280s                 connect, read = timeout
280s                 timeout = TimeoutSauce(connect=connect, read=read)
280s             except ValueError:
280s                 raise ValueError(
280s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
280s                     f"or a single float to set both timeouts to the same value."
280s                 )
280s         elif isinstance(timeout, TimeoutSauce):
280s             pass
280s         else:
280s             timeout = TimeoutSauce(connect=timeout, read=timeout)
280s 
280s         try:
280s >           resp = conn.urlopen(
280s                 method=request.method,
280s                 url=url,
280s                 body=request.body,
280s                 headers=request.headers,
280s                 redirect=False,
280s                 assert_same_host=False,
280s                 preload_content=False,
280s                 decode_content=False,
280s                 retries=self.max_retries,
280s                 timeout=timeout,
280s                 chunked=chunked,
280s             )
280s 
280s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
280s     retries = retries.increment(
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
280s 
280s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
280s method = 'GET', url = '/a%40b/api/contents', response = None
280s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
280s _pool = 
280s _stacktrace = 
280s 
280s     def increment(
280s         self,
280s         method: str | None = None,
280s         url: str | None = None,
280s         response: BaseHTTPResponse | None = None,
280s         error: Exception | None = None,
280s         _pool: ConnectionPool | None = None,
280s         _stacktrace: TracebackType | None = None,
280s     ) -> Retry:
280s         """Return a new Retry object with incremented retry counters.
280s 
280s         :param response: A response object, or None, if the server did not
280s             return a response.
280s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
280s         :param Exception error: An error encountered during the request, or
280s             None if the response was received successfully.
280s 
280s         :return: A new ``Retry`` object.
280s         """
280s         if self.total is False and error:
280s             # Disabled, indicate to re-raise the error.
280s             raise reraise(type(error), error, _stacktrace)
280s 
280s         total = self.total
280s         if total is not None:
280s             total -= 1
280s 
280s         connect = self.connect
280s         read = self.read
280s         redirect = self.redirect
280s         status_count = self.status
280s         other = self.other
280s         cause = "unknown"
280s         status = None
280s         redirect_location = None
280s 
280s         if error and self._is_connection_error(error):
280s             # Connect retry?
280s             if connect is False:
280s                 raise reraise(type(error), error, _stacktrace)
280s             elif connect is not None:
280s                 connect -= 1
280s 
280s         elif error and self._is_read_error(error):
280s             # Read retry?
280s             if read is False or method is None or not self._is_method_retryable(method):
280s                 raise reraise(type(error), error, _stacktrace)
280s             elif read is not None:
280s                 read -= 1
280s 
280s         elif error:
280s             # Other retry?
280s             if other is not None:
280s                 other -= 1
280s 
280s         elif response and response.get_redirect_location():
280s             # Redirect retry?
280s             if redirect is not None:
280s                 redirect -= 1
280s             cause = "too many redirects"
280s             response_redirect_location = response.get_redirect_location()
280s             if response_redirect_location:
280s                 redirect_location = response_redirect_location
280s             status = response.status
280s 
280s         else:
280s             # Incrementing because of a server error like a 500 in
280s             # status_forcelist and the given method is in the allowed_methods
280s             cause = ResponseError.GENERIC_ERROR
280s             if response and response.status:
280s                 if status_count is not None:
280s                     status_count -= 1
280s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
280s                 status = response.status
280s 
280s         history = self.history + (
280s             RequestHistory(method, url, error, status, redirect_location),
280s         )
280s 
280s         new_retry = self.new(
280s             total=total,
280s             connect=connect,
280s             read=read,
280s             redirect=redirect,
280s             status=status_count,
280s             other=other,
280s             history=history,
280s         )
280s 
280s         if new_retry.is_exhausted():
280s             reason = error or ResponseError(cause)
280s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
280s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
280s 
280s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
280s 
280s During handling of the above exception, another exception occurred:
280s 
280s cls = 
280s 
280s     @classmethod
280s     def wait_until_alive(cls):
280s         """Wait for the server to be alive"""
280s         url = cls.base_url() + 'api/contents'
280s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
280s             try:
280s >               cls.fetch_url(url)
280s 
280s notebook/tests/launchnotebook.py:53: 
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
280s notebook/tests/launchnotebook.py:82: in fetch_url
280s     return requests.get(url)
280s /usr/lib/python3/dist-packages/requests/api.py:73: in get
280s     return request("get", url, params=params, **kwargs)
280s /usr/lib/python3/dist-packages/requests/api.py:59: in request
280s     return session.request(method=method, url=url, **kwargs)
280s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
280s     resp = self.send(prep, **send_kwargs)
280s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
280s     r = adapter.send(request, **kwargs)
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
280s 
280s self = 
280s request = , stream = False
280s timeout = Timeout(connect=None, read=None, total=None), verify = True
280s cert = None, proxies = OrderedDict()
280s 
280s     def send(
280s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
280s     ):
280s         """Sends PreparedRequest object. Returns Response object.
280s 
280s         :param request: The :class:`PreparedRequest ` being sent.
280s         :param stream: (optional) Whether to stream the request content.
280s         :param timeout: (optional) How long to wait for the server to send
280s             data before giving up, as a float, or a :ref:`(connect timeout,
280s             read timeout) ` tuple.
280s         :type timeout: float or tuple or urllib3 Timeout object
280s         :param verify: (optional) Either a boolean, in which case it controls whether
280s             we verify the server's TLS certificate, or a string, in which case it
280s             must be a path to a CA bundle to use
280s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
280s         :param proxies: (optional) The proxies dictionary to apply to the request.
280s         :rtype: requests.Response
280s         """
280s 
280s         try:
280s             conn = self.get_connection(request.url, proxies)
280s         except LocationValueError as e:
280s             raise InvalidURL(e, request=request)
280s 
280s         self.cert_verify(conn, request.url, verify, cert)
280s         url = self.request_url(request, proxies)
280s         self.add_headers(
280s             request,
280s             stream=stream,
280s             timeout=timeout,
280s             verify=verify,
280s             cert=cert,
280s             proxies=proxies,
280s         )
280s 
280s         chunked = not (request.body is None or "Content-Length" in request.headers)
280s 
280s         if isinstance(timeout, tuple):
280s             try:
280s                 connect, read = timeout
280s                 timeout = TimeoutSauce(connect=connect, read=read)
280s             except ValueError:
280s                 raise ValueError(
280s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
280s                     f"or a single float to set both timeouts to the same value."
280s                 )
280s         elif isinstance(timeout, TimeoutSauce):
280s             pass
280s         else:
280s             timeout = TimeoutSauce(connect=timeout, read=timeout)
280s 
280s         try:
280s             resp = conn.urlopen(
280s                 method=request.method,
280s                 url=url,
280s                 body=request.body,
280s                 headers=request.headers,
280s                 redirect=False,
280s                 assert_same_host=False,
280s                 preload_content=False,
280s                 decode_content=False,
280s                 retries=self.max_retries,
280s                 timeout=timeout,
280s                 chunked=chunked,
280s             )
280s 
280s         except (ProtocolError, OSError) as err:
280s             raise ConnectionError(err, request=request)
280s 
280s         except MaxRetryError as e:
280s             if isinstance(e.reason, ConnectTimeoutError):
280s                 # TODO: Remove this in 3.0.0: see #2811
280s                 if not isinstance(e.reason, NewConnectionError):
280s                     raise ConnectTimeout(e, request=request)
280s 
280s             if isinstance(e.reason, ResponseError):
280s                 raise RetryError(e, request=request)
280s 
280s             if isinstance(e.reason, _ProxyError):
280s                 raise ProxyError(e, request=request)
280s 
280s             if isinstance(e.reason, _SSLError):
280s                 # This branch is for urllib3 v1.22 and later.
280s                 raise SSLError(e, request=request)
280s 
280s >           raise ConnectionError(e, request=request)
280s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
280s 
280s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
280s 
280s The above exception was the direct cause of the following exception:
280s 
280s cls = 
280s 
280s     @classmethod
280s     def setup_class(cls):
280s         cls.tmp_dir = TemporaryDirectory()
280s         def tmp(*parts):
280s             path = os.path.join(cls.tmp_dir.name, *parts)
280s             try:
280s                 os.makedirs(path)
280s             except OSError as e:
280s                 if e.errno != errno.EEXIST:
280s                     raise
280s             return path
280s 
280s         cls.home_dir = tmp('home')
280s         data_dir = cls.data_dir = tmp('data')
280s         config_dir = cls.config_dir = tmp('config')
280s         runtime_dir = cls.runtime_dir = tmp('runtime')
280s         cls.notebook_dir = tmp('notebooks')
280s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
280s         cls.env_patch.start()
280s         # Patch systemwide & user-wide data & config directories, to isolate
280s         # the tests from oddities of the local setup. But leave Python env
280s         # locations alone, so data files for e.g. nbconvert are accessible.
280s         # If this isolation isn't sufficient, you may need to run the tests in
280s         # a virtualenv or conda env.
280s         cls.path_patch = patch.multiple(
280s             jupyter_core.paths,
280s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
280s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
280s         )
280s         cls.path_patch.start()
280s 
280s         config = cls.config or Config()
280s         config.NotebookNotary.db_file = ':memory:'
280s 
280s         cls.token = hexlify(os.urandom(4)).decode('ascii')
280s 
280s         started = Event()
280s         def start_thread():
280s             try:
280s                 bind_args = cls.get_bind_args()
280s                 app = cls.notebook = NotebookApp(
280s                     port_retries=0,
280s                     open_browser=False,
280s                     config_dir=cls.config_dir,
280s                     data_dir=cls.data_dir,
280s                     runtime_dir=cls.runtime_dir,
280s                     notebook_dir=cls.notebook_dir,
280s                     base_url=cls.url_prefix,
280s                     config=config,
280s                     allow_root=True,
280s                     token=cls.token,
280s                     **bind_args
280s                 )
280s                 if "asyncio" in sys.modules:
280s                     app._init_asyncio_patch()
280s                     import asyncio
280s 
280s                     asyncio.set_event_loop(asyncio.new_event_loop())
280s                     # Patch the current loop in order to match production
280s                     # behavior
280s                     import nest_asyncio
280s 
280s                     nest_asyncio.apply()
280s                 # don't register signal handler during tests
280s                 app.init_signal = lambda : None
280s                 # clear log handlers and propagate to root for nose to capture it
280s                 # needs to be redone after initialize, which reconfigures logging
280s                 app.log.propagate = True
280s                 app.log.handlers = []
280s                 app.initialize(argv=cls.get_argv())
280s                 app.log.propagate = True
280s                 app.log.handlers = []
280s                 loop = IOLoop.current()
280s                 loop.add_callback(started.set)
280s                 app.start()
280s             finally:
280s                 # set the event, so failure to start doesn't cause a hang
280s                 started.set()
280s                 app.session_manager.close()
280s         cls.notebook_thread = Thread(target=start_thread)
280s         cls.notebook_thread.daemon = True
280s         cls.notebook_thread.start()
280s         started.wait()
280s >       cls.wait_until_alive()
280s 
280s notebook/tests/launchnotebook.py:198: 
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
280s 
280s cls = 
280s 
280s     @classmethod
280s     def wait_until_alive(cls):
280s         """Wait for the server to be alive"""
280s         url = cls.base_url() + 'api/contents'
280s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
280s             try:
280s                 cls.fetch_url(url)
280s             except ModuleNotFoundError as error:
280s                 # Errors that should be immediately thrown back to caller
280s                 raise error
280s             except Exception as e:
280s                 if not cls.notebook_thread.is_alive():
280s >                   raise RuntimeError("The notebook server failed to start") from e
280s E                   RuntimeError: The notebook server failed to start
280s 
280s notebook/tests/launchnotebook.py:59: RuntimeError
280s ______________ ERROR at setup of APITest.test_copy_put_400_hidden ______________
280s 
280s self = 
280s 
280s     def _new_conn(self) -> socket.socket:
280s         """Establish a socket connection and set nodelay settings on it.
280s 
280s         :return: New socket connection.
280s         """
280s         try:
280s >           sock = connection.create_connection(
280s                 (self._dns_host, self.port),
280s                 self.timeout,
280s                 source_address=self.source_address,
280s                 socket_options=self.socket_options,
280s             )
280s 
280s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
280s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
280s     raise err
280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
280s 
280s address = ('localhost', 12341), timeout = None, source_address = None
280s socket_options = [(6, 1, 1)]
280s 
280s     def create_connection(
280s         address: tuple[str, int],
280s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
280s         source_address: tuple[str, int] | None = None,
280s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
280s     ) -> socket.socket:
280s         """Connect to *address* and return the socket object.
280s 
280s         Convenience function.  Connect to *address* (a 2-tuple ``(host,
280s         port)``) and return the socket object.  Passing the optional
280s         *timeout* parameter will set the timeout on the socket instance
280s         before attempting to connect.  If no *timeout* is supplied, the
280s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
280s         is used.  If *source_address* is set it must be a tuple of (host, port)
280s         for the socket to bind as a source address before making the connection.
280s         An host of '' or port 0 tells the OS to use the default.
280s         """
280s 
280s         host, port = address
280s         if host.startswith("["):
280s             host = host.strip("[]")
280s         err = None
280s 
280s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
280s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
280s         # The original create_connection function always returns all records.
280s         family = allowed_gai_family()
280s 
280s         try:
280s             host.encode("idna")
280s         except UnicodeError:
280s             raise LocationParseError(f"'{host}', label empty or too long") from None
280s 
280s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
280s             af, socktype, proto, canonname, sa = res
280s             sock = None
280s             try:
280s                 sock = socket.socket(af, socktype, proto)
280s 
280s                 # If provided, set socket level options before connecting.
280s                 _set_socket_options(sock, socket_options)
280s 
280s                 if timeout is not _DEFAULT_TIMEOUT:
280s                     sock.settimeout(timeout)
280s                 if source_address:
280s                     sock.bind(source_address)
280s >               sock.connect(sa)
280s E               ConnectionRefusedError: [Errno 111] Connection refused
280s 
280s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
280s 
280s The above exception was the direct cause of the following exception:
280s 
280s self = 
280s method = 'GET', url = '/a%40b/api/contents', body = None
280s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
280s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
280s redirect = False, assert_same_host = False
280s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
280s release_conn = False, chunked = False, body_pos = None, preload_content = False
280s decode_content = False, response_kw = {}
280s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
280s destination_scheme = None, conn = None, release_this_conn = True
280s http_tunnel_required = False, err = None, clean_exit = False
280s 
280s     def urlopen(  # type: ignore[override]
280s         self,
280s         method: str,
280s         url: str,
280s         body: _TYPE_BODY | None = None,
280s         headers: typing.Mapping[str, str] | None = None,
280s         retries: Retry | bool | int | None = None,
280s         redirect: bool = True,
280s         assert_same_host: bool = True,
280s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
280s         pool_timeout: int | None = None,
280s         release_conn: bool | None = None,
280s         chunked: bool = False,
280s         body_pos: _TYPE_BODY_POSITION | None = None,
280s         preload_content: bool = True,
280s         decode_content: bool = True,
280s         **response_kw: typing.Any,
280s     ) -> BaseHTTPResponse:
280s         """
280s         Get a connection from the pool and perform an HTTP request. This is the
280s         lowest level call for making a request, so you'll need to specify all
280s         the raw details.
280s 
280s         .. note::
280s 
280s            More commonly, it's appropriate to use a convenience method
280s            such as :meth:`request`.
280s 
280s         .. note::
280s 
280s            `release_conn` will only behave as expected if
280s            `preload_content=False` because we want to make
280s            `preload_content=False` the default behaviour someday soon without
280s            breaking backwards compatibility.
280s 
280s         :param method:
280s             HTTP request method (such as GET, POST, PUT, etc.)
280s 
280s         :param url:
280s             The URL to perform the request on.
280s 
280s         :param body:
280s             Data to send in the request body, either :class:`str`, :class:`bytes`,
280s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
280s 
280s         :param headers:
280s             Dictionary of custom headers to send, such as User-Agent,
280s             If-None-Match, etc. If None, pool headers are used. If provided,
280s             these headers completely replace any pool-specific headers.
280s 
280s         :param retries:
280s             Configure the number of retries to allow before raising a
280s             :class:`~urllib3.exceptions.MaxRetryError` exception.
280s 
280s             Pass ``None`` to retry until you receive a response. Pass a
280s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
280s             over different types of retries.
280s             Pass an integer number to retry connection errors that many times,
280s             but no other types of errors. Pass zero to never retry.
280s 
280s             If ``False``, then retries are disabled and any exception is raised
280s             immediately. Also, instead of raising a MaxRetryError on redirects,
280s             the redirect response will be returned.
280s 
280s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
280s 
280s         :param redirect:
280s             If True, automatically handle redirects (status codes 301, 302,
280s             303, 307, 308). Each redirect counts as a retry. Disabling retries
280s             will disable redirect, too.
280s 280s :param assert_same_host: 280s If ``True``, will make sure that the host of the pool requests is 280s consistent else will raise HostChangedError. When ``False``, you can 280s use the pool on an HTTP proxy and request foreign hosts. 280s 280s :param timeout: 280s If specified, overrides the default timeout for this one 280s request. It may be a float (in seconds) or an instance of 280s :class:`urllib3.util.Timeout`. 280s 280s :param pool_timeout: 280s If set and the pool is set to block=True, then this method will 280s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 280s connection is available within the time period. 280s 280s :param bool preload_content: 280s If True, the response's body will be preloaded into memory. 280s 280s :param bool decode_content: 280s If True, will attempt to decode the body based on the 280s 'content-encoding' header. 280s 280s :param release_conn: 280s If False, then the urlopen call will not release the connection 280s back into the pool once a response is received (but will release if 280s you read the entire contents of the response such as when 280s `preload_content=True`). This is useful if you're not preloading 280s the response's content immediately. You will need to call 280s ``r.release_conn()`` on the response ``r`` to return the connection 280s back into the pool. If None, it takes the value of ``preload_content`` 280s which defaults to ``True``. 280s 280s :param bool chunked: 280s If True, urllib3 will send the body using chunked transfer 280s encoding. Otherwise, urllib3 will send the body using the standard 280s content-length form. Defaults to False. 280s 280s :param int body_pos: 280s Position to seek to in file-like body in the event of a retry or 280s redirect. Typically this won't need to be set because urllib3 will 280s auto-populate the value when needed. 
280s """ 280s parsed_url = parse_url(url) 280s destination_scheme = parsed_url.scheme 280s 280s if headers is None: 280s headers = self.headers 280s 280s if not isinstance(retries, Retry): 280s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 280s 280s if release_conn is None: 280s release_conn = preload_content 280s 280s # Check host 280s if assert_same_host and not self.is_same_host(url): 280s raise HostChangedError(self, url, retries) 280s 280s # Ensure that the URL we're connecting to is properly encoded 280s if url.startswith("/"): 280s url = to_str(_encode_target(url)) 280s else: 280s url = to_str(parsed_url.url) 280s 280s conn = None 280s 280s # Track whether `conn` needs to be released before 280s # returning/raising/recursing. Update this variable if necessary, and 280s # leave `release_conn` constant throughout the function. That way, if 280s # the function recurses, the original value of `release_conn` will be 280s # passed down into the recursive call, and its value will be respected. 280s # 280s # See issue #651 [1] for details. 280s # 280s # [1] 280s release_this_conn = release_conn 280s 280s http_tunnel_required = connection_requires_http_tunnel( 280s self.proxy, self.proxy_config, destination_scheme 280s ) 280s 280s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 280s # have to copy the headers dict so we can safely change it without those 280s # changes being reflected in anyone else's copy. 280s if not http_tunnel_required: 280s headers = headers.copy() # type: ignore[attr-defined] 280s headers.update(self.proxy_headers) # type: ignore[union-attr] 280s 280s # Must keep the exception bound to a separate variable or else Python 3 280s # complains about UnboundLocalError. 280s err = None 280s 280s # Keep track of whether we cleanly exited the except block. This 280s # ensures we do proper cleanup in finally. 280s clean_exit = False 280s 280s # Rewind body position, if needed. 
Record current position 280s # for future rewinds in the event of a redirect/retry. 280s body_pos = set_file_position(body, body_pos) 280s 280s try: 280s # Request a connection from the queue. 280s timeout_obj = self._get_timeout(timeout) 280s conn = self._get_conn(timeout=pool_timeout) 280s 280s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 280s 280s # Is this a closed/new connection that requires CONNECT tunnelling? 280s if self.proxy is not None and http_tunnel_required and conn.is_closed: 280s try: 280s self._prepare_proxy(conn) 280s except (BaseSSLError, OSError, SocketTimeout) as e: 280s self._raise_timeout( 280s err=e, url=self.proxy.url, timeout_value=conn.timeout 280s ) 280s raise 280s 280s # If we're going to release the connection in ``finally:``, then 280s # the response doesn't need to know about the connection. Otherwise 280s # it will also try to release it and we'll have a double-release 280s # mess. 280s response_conn = conn if not release_conn else None 280s 280s # Make the request on the HTTPConnection object 280s > response = self._make_request( 280s conn, 280s method, 280s url, 280s timeout=timeout_obj, 280s body=body, 280s headers=headers, 280s chunked=chunked, 280s retries=retries, 280s response_conn=response_conn, 280s preload_content=preload_content, 280s decode_content=decode_content, 280s **response_kw, 280s ) 280s 280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 280s conn.request( 280s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 280s self.endheaders() 280s /usr/lib/python3.12/http/client.py:1331: in endheaders 280s self._send_output(message_body, encode_chunked=encode_chunked) 280s /usr/lib/python3.12/http/client.py:1091: in _send_output 280s self.send(msg) 280s /usr/lib/python3.12/http/client.py:1035: in 
send 280s self.connect() 280s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 280s self.sock = self._new_conn() 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s self = 280s 280s def _new_conn(self) -> socket.socket: 280s """Establish a socket connection and set nodelay settings on it. 280s 280s :return: New socket connection. 280s """ 280s try: 280s sock = connection.create_connection( 280s (self._dns_host, self.port), 280s self.timeout, 280s source_address=self.source_address, 280s socket_options=self.socket_options, 280s ) 280s except socket.gaierror as e: 280s raise NameResolutionError(self.host, self, e) from e 280s except SocketTimeout as e: 280s raise ConnectTimeoutError( 280s self, 280s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 280s ) from e 280s 280s except OSError as e: 280s > raise NewConnectionError( 280s self, f"Failed to establish a new connection: {e}" 280s ) from e 280s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 280s 280s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 280s 280s The above exception was the direct cause of the following exception: 280s 280s self = 280s request = , stream = False 280s timeout = Timeout(connect=None, read=None, total=None), verify = True 280s cert = None, proxies = OrderedDict() 280s 280s def send( 280s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 280s ): 280s """Sends PreparedRequest object. Returns Response object. 280s 280s :param request: The :class:`PreparedRequest ` being sent. 280s :param stream: (optional) Whether to stream the request content. 280s :param timeout: (optional) How long to wait for the server to send 280s data before giving up, as a float, or a :ref:`(connect timeout, 280s read timeout) ` tuple. 
280s :type timeout: float or tuple or urllib3 Timeout object 280s :param verify: (optional) Either a boolean, in which case it controls whether 280s we verify the server's TLS certificate, or a string, in which case it 280s must be a path to a CA bundle to use 280s :param cert: (optional) Any user-provided SSL certificate to be trusted. 280s :param proxies: (optional) The proxies dictionary to apply to the request. 280s :rtype: requests.Response 280s """ 280s 280s try: 280s conn = self.get_connection(request.url, proxies) 280s except LocationValueError as e: 280s raise InvalidURL(e, request=request) 280s 280s self.cert_verify(conn, request.url, verify, cert) 280s url = self.request_url(request, proxies) 280s self.add_headers( 280s request, 280s stream=stream, 280s timeout=timeout, 280s verify=verify, 280s cert=cert, 280s proxies=proxies, 280s ) 280s 280s chunked = not (request.body is None or "Content-Length" in request.headers) 280s 280s if isinstance(timeout, tuple): 280s try: 280s connect, read = timeout 280s timeout = TimeoutSauce(connect=connect, read=read) 280s except ValueError: 280s raise ValueError( 280s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 280s f"or a single float to set both timeouts to the same value." 
280s ) 280s elif isinstance(timeout, TimeoutSauce): 280s pass 280s else: 280s timeout = TimeoutSauce(connect=timeout, read=timeout) 280s 280s try: 280s > resp = conn.urlopen( 280s method=request.method, 280s url=url, 280s body=request.body, 280s headers=request.headers, 280s redirect=False, 280s assert_same_host=False, 280s preload_content=False, 280s decode_content=False, 280s retries=self.max_retries, 280s timeout=timeout, 280s chunked=chunked, 280s ) 280s 280s /usr/lib/python3/dist-packages/requests/adapters.py:486: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 280s retries = retries.increment( 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 280s method = 'GET', url = '/a%40b/api/contents', response = None 280s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 280s _pool = 280s _stacktrace = 280s 280s def increment( 280s self, 280s method: str | None = None, 280s url: str | None = None, 280s response: BaseHTTPResponse | None = None, 280s error: Exception | None = None, 280s _pool: ConnectionPool | None = None, 280s _stacktrace: TracebackType | None = None, 280s ) -> Retry: 280s """Return a new Retry object with incremented retry counters. 280s 280s :param response: A response object, or None, if the server did not 280s return a response. 280s :type response: :class:`~urllib3.response.BaseHTTPResponse` 280s :param Exception error: An error encountered during the request, or 280s None if the response was received successfully. 280s 280s :return: A new ``Retry`` object. 280s """ 280s if self.total is False and error: 280s # Disabled, indicate to re-raise the error. 
280s raise reraise(type(error), error, _stacktrace) 280s 280s total = self.total 280s if total is not None: 280s total -= 1 280s 280s connect = self.connect 280s read = self.read 280s redirect = self.redirect 280s status_count = self.status 280s other = self.other 280s cause = "unknown" 280s status = None 280s redirect_location = None 280s 280s if error and self._is_connection_error(error): 280s # Connect retry? 280s if connect is False: 280s raise reraise(type(error), error, _stacktrace) 280s elif connect is not None: 280s connect -= 1 280s 280s elif error and self._is_read_error(error): 280s # Read retry? 280s if read is False or method is None or not self._is_method_retryable(method): 280s raise reraise(type(error), error, _stacktrace) 280s elif read is not None: 280s read -= 1 280s 280s elif error: 280s # Other retry? 280s if other is not None: 280s other -= 1 280s 280s elif response and response.get_redirect_location(): 280s # Redirect retry? 280s if redirect is not None: 280s redirect -= 1 280s cause = "too many redirects" 280s response_redirect_location = response.get_redirect_location() 280s if response_redirect_location: 280s redirect_location = response_redirect_location 280s status = response.status 280s 280s else: 280s # Incrementing because of a server error like a 500 in 280s # status_forcelist and the given method is in the allowed_methods 280s cause = ResponseError.GENERIC_ERROR 280s if response and response.status: 280s if status_count is not None: 280s status_count -= 1 280s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 280s status = response.status 280s 280s history = self.history + ( 280s RequestHistory(method, url, error, status, redirect_location), 280s ) 280s 280s new_retry = self.new( 280s total=total, 280s connect=connect, 280s read=read, 280s redirect=redirect, 280s status=status_count, 280s other=other, 280s history=history, 280s ) 280s 280s if new_retry.is_exhausted(): 280s reason = error or 
ResponseError(cause) 280s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 280s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 280s 280s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 280s 280s During handling of the above exception, another exception occurred: 280s 280s cls = 280s 280s @classmethod 280s def wait_until_alive(cls): 280s """Wait for the server to be alive""" 280s url = cls.base_url() + 'api/contents' 280s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 280s try: 280s > cls.fetch_url(url) 280s 280s notebook/tests/launchnotebook.py:53: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s notebook/tests/launchnotebook.py:82: in fetch_url 280s return requests.get(url) 280s /usr/lib/python3/dist-packages/requests/api.py:73: in get 280s return request("get", url, params=params, **kwargs) 280s /usr/lib/python3/dist-packages/requests/api.py:59: in request 280s return session.request(method=method, url=url, **kwargs) 280s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 280s resp = self.send(prep, **send_kwargs) 280s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 280s r = adapter.send(request, **kwargs) 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s self = 280s request = , stream = False 280s timeout = Timeout(connect=None, read=None, total=None), verify = True 280s cert = None, proxies = OrderedDict() 280s 280s def send( 280s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 280s ): 280s """Sends PreparedRequest object. Returns Response object. 280s 280s :param request: The :class:`PreparedRequest ` being sent. 
280s :param stream: (optional) Whether to stream the request content. 280s :param timeout: (optional) How long to wait for the server to send 280s data before giving up, as a float, or a :ref:`(connect timeout, 280s read timeout) ` tuple. 280s :type timeout: float or tuple or urllib3 Timeout object 280s :param verify: (optional) Either a boolean, in which case it controls whether 280s we verify the server's TLS certificate, or a string, in which case it 280s must be a path to a CA bundle to use 280s :param cert: (optional) Any user-provided SSL certificate to be trusted. 280s :param proxies: (optional) The proxies dictionary to apply to the request. 280s :rtype: requests.Response 280s """ 280s 280s try: 280s conn = self.get_connection(request.url, proxies) 280s except LocationValueError as e: 280s raise InvalidURL(e, request=request) 280s 280s self.cert_verify(conn, request.url, verify, cert) 280s url = self.request_url(request, proxies) 280s self.add_headers( 280s request, 280s stream=stream, 280s timeout=timeout, 280s verify=verify, 280s cert=cert, 280s proxies=proxies, 280s ) 280s 280s chunked = not (request.body is None or "Content-Length" in request.headers) 280s 280s if isinstance(timeout, tuple): 280s try: 280s connect, read = timeout 280s timeout = TimeoutSauce(connect=connect, read=read) 280s except ValueError: 280s raise ValueError( 280s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 280s f"or a single float to set both timeouts to the same value." 
280s ) 280s elif isinstance(timeout, TimeoutSauce): 280s pass 280s else: 280s timeout = TimeoutSauce(connect=timeout, read=timeout) 280s 280s try: 280s resp = conn.urlopen( 280s method=request.method, 280s url=url, 280s body=request.body, 280s headers=request.headers, 280s redirect=False, 280s assert_same_host=False, 280s preload_content=False, 280s decode_content=False, 280s retries=self.max_retries, 280s timeout=timeout, 280s chunked=chunked, 280s ) 280s 280s except (ProtocolError, OSError) as err: 280s raise ConnectionError(err, request=request) 280s 280s except MaxRetryError as e: 280s if isinstance(e.reason, ConnectTimeoutError): 280s # TODO: Remove this in 3.0.0: see #2811 280s if not isinstance(e.reason, NewConnectionError): 280s raise ConnectTimeout(e, request=request) 280s 280s if isinstance(e.reason, ResponseError): 280s raise RetryError(e, request=request) 280s 280s if isinstance(e.reason, _ProxyError): 280s raise ProxyError(e, request=request) 280s 280s if isinstance(e.reason, _SSLError): 280s # This branch is for urllib3 v1.22 and later. 
280s raise SSLError(e, request=request) 280s 280s > raise ConnectionError(e, request=request) 280s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 280s 280s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 280s 280s The above exception was the direct cause of the following exception: 280s 280s cls = 280s 280s @classmethod 280s def setup_class(cls): 280s cls.tmp_dir = TemporaryDirectory() 280s def tmp(*parts): 280s path = os.path.join(cls.tmp_dir.name, *parts) 280s try: 280s os.makedirs(path) 280s except OSError as e: 280s if e.errno != errno.EEXIST: 280s raise 280s return path 280s 280s cls.home_dir = tmp('home') 280s data_dir = cls.data_dir = tmp('data') 280s config_dir = cls.config_dir = tmp('config') 280s runtime_dir = cls.runtime_dir = tmp('runtime') 280s cls.notebook_dir = tmp('notebooks') 280s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 280s cls.env_patch.start() 280s # Patch systemwide & user-wide data & config directories, to isolate 280s # the tests from oddities of the local setup. But leave Python env 280s # locations alone, so data files for e.g. nbconvert are accessible. 280s # If this isolation isn't sufficient, you may need to run the tests in 280s # a virtualenv or conda env. 
280s cls.path_patch = patch.multiple( 280s jupyter_core.paths, 280s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 280s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 280s ) 280s cls.path_patch.start() 280s 280s config = cls.config or Config() 280s config.NotebookNotary.db_file = ':memory:' 280s 280s cls.token = hexlify(os.urandom(4)).decode('ascii') 280s 280s started = Event() 280s def start_thread(): 280s try: 280s bind_args = cls.get_bind_args() 280s app = cls.notebook = NotebookApp( 280s port_retries=0, 280s open_browser=False, 280s config_dir=cls.config_dir, 280s data_dir=cls.data_dir, 280s runtime_dir=cls.runtime_dir, 280s notebook_dir=cls.notebook_dir, 280s base_url=cls.url_prefix, 280s config=config, 280s allow_root=True, 280s token=cls.token, 280s **bind_args 280s ) 280s if "asyncio" in sys.modules: 280s app._init_asyncio_patch() 280s import asyncio 280s 280s asyncio.set_event_loop(asyncio.new_event_loop()) 280s # Patch the current loop in order to match production 280s # behavior 280s import nest_asyncio 280s 280s nest_asyncio.apply() 280s # don't register signal handler during tests 280s app.init_signal = lambda : None 280s # clear log handlers and propagate to root for nose to capture it 280s # needs to be redone after initialize, which reconfigures logging 280s app.log.propagate = True 280s app.log.handlers = [] 280s app.initialize(argv=cls.get_argv()) 280s app.log.propagate = True 280s app.log.handlers = [] 280s loop = IOLoop.current() 280s loop.add_callback(started.set) 280s app.start() 280s finally: 280s # set the event, so failure to start doesn't cause a hang 280s started.set() 280s app.session_manager.close() 280s cls.notebook_thread = Thread(target=start_thread) 280s cls.notebook_thread.daemon = True 280s cls.notebook_thread.start() 280s started.wait() 280s > cls.wait_until_alive() 280s 280s notebook/tests/launchnotebook.py:198: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s cls = 280s 280s 
@classmethod 280s def wait_until_alive(cls): 280s """Wait for the server to be alive""" 280s url = cls.base_url() + 'api/contents' 280s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 280s try: 280s cls.fetch_url(url) 280s except ModuleNotFoundError as error: 280s # Errors that should be immediately thrown back to caller 280s raise error 280s except Exception as e: 280s if not cls.notebook_thread.is_alive(): 280s > raise RuntimeError("The notebook server failed to start") from e 280s E RuntimeError: The notebook server failed to start 280s 280s notebook/tests/launchnotebook.py:59: RuntimeError 280s ________________ ERROR at setup of APITest.test_create_untitled ________________ 280s 280s self = 280s 280s def _new_conn(self) -> socket.socket: 280s """Establish a socket connection and set nodelay settings on it. 280s 280s :return: New socket connection. 280s """ 280s try: 280s > sock = connection.create_connection( 280s (self._dns_host, self.port), 280s self.timeout, 280s source_address=self.source_address, 280s socket_options=self.socket_options, 280s ) 280s 280s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 280s raise err 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s address = ('localhost', 12341), timeout = None, source_address = None 280s socket_options = [(6, 1, 1)] 280s 280s def create_connection( 280s address: tuple[str, int], 280s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 280s source_address: tuple[str, int] | None = None, 280s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 280s ) -> socket.socket: 280s """Connect to *address* and return the socket object. 280s 280s Convenience function. Connect to *address* (a 2-tuple ``(host, 280s port)``) and return the socket object. 
send 280s self.connect() 280s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 280s self.sock = self._new_conn() 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s self = 280s 280s def _new_conn(self) -> socket.socket: 280s """Establish a socket connection and set nodelay settings on it. 280s 280s :return: New socket connection. 280s """ 280s try: 280s sock = connection.create_connection( 280s (self._dns_host, self.port), 280s self.timeout, 280s source_address=self.source_address, 280s socket_options=self.socket_options, 280s ) 280s except socket.gaierror as e: 280s raise NameResolutionError(self.host, self, e) from e 280s except SocketTimeout as e: 280s raise ConnectTimeoutError( 280s self, 280s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 280s ) from e 280s 280s except OSError as e: 280s > raise NewConnectionError( 280s self, f"Failed to establish a new connection: {e}" 280s ) from e 280s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 280s 280s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 280s 280s The above exception was the direct cause of the following exception: 280s 280s self = 280s request = , stream = False 280s timeout = Timeout(connect=None, read=None, total=None), verify = True 280s cert = None, proxies = OrderedDict() 280s 280s def send( 280s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 280s ): 280s """Sends PreparedRequest object. Returns Response object. 280s 280s :param request: The :class:`PreparedRequest ` being sent. 280s :param stream: (optional) Whether to stream the request content. 280s :param timeout: (optional) How long to wait for the server to send 280s data before giving up, as a float, or a :ref:`(connect timeout, 280s read timeout) ` tuple. 
280s :type timeout: float or tuple or urllib3 Timeout object 280s :param verify: (optional) Either a boolean, in which case it controls whether 280s we verify the server's TLS certificate, or a string, in which case it 280s must be a path to a CA bundle to use 280s :param cert: (optional) Any user-provided SSL certificate to be trusted. 280s :param proxies: (optional) The proxies dictionary to apply to the request. 280s :rtype: requests.Response 280s """ 280s 280s try: 280s conn = self.get_connection(request.url, proxies) 280s except LocationValueError as e: 280s raise InvalidURL(e, request=request) 280s 280s self.cert_verify(conn, request.url, verify, cert) 280s url = self.request_url(request, proxies) 280s self.add_headers( 280s request, 280s stream=stream, 280s timeout=timeout, 280s verify=verify, 280s cert=cert, 280s proxies=proxies, 280s ) 280s 280s chunked = not (request.body is None or "Content-Length" in request.headers) 280s 280s if isinstance(timeout, tuple): 280s try: 280s connect, read = timeout 280s timeout = TimeoutSauce(connect=connect, read=read) 280s except ValueError: 280s raise ValueError( 280s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 280s f"or a single float to set both timeouts to the same value." 
280s ) 280s elif isinstance(timeout, TimeoutSauce): 280s pass 280s else: 280s timeout = TimeoutSauce(connect=timeout, read=timeout) 280s 280s try: 280s > resp = conn.urlopen( 280s method=request.method, 280s url=url, 280s body=request.body, 280s headers=request.headers, 280s redirect=False, 280s assert_same_host=False, 280s preload_content=False, 280s decode_content=False, 280s retries=self.max_retries, 280s timeout=timeout, 280s chunked=chunked, 280s ) 280s 280s /usr/lib/python3/dist-packages/requests/adapters.py:486: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 280s retries = retries.increment( 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 280s method = 'GET', url = '/a%40b/api/contents', response = None 280s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 280s _pool = 280s _stacktrace = 280s 280s def increment( 280s self, 280s method: str | None = None, 280s url: str | None = None, 280s response: BaseHTTPResponse | None = None, 280s error: Exception | None = None, 280s _pool: ConnectionPool | None = None, 280s _stacktrace: TracebackType | None = None, 280s ) -> Retry: 280s """Return a new Retry object with incremented retry counters. 280s 280s :param response: A response object, or None, if the server did not 280s return a response. 280s :type response: :class:`~urllib3.response.BaseHTTPResponse` 280s :param Exception error: An error encountered during the request, or 280s None if the response was received successfully. 280s 280s :return: A new ``Retry`` object. 280s """ 280s if self.total is False and error: 280s # Disabled, indicate to re-raise the error. 
280s raise reraise(type(error), error, _stacktrace) 280s 280s total = self.total 280s if total is not None: 280s total -= 1 280s 280s connect = self.connect 280s read = self.read 280s redirect = self.redirect 280s status_count = self.status 280s other = self.other 280s cause = "unknown" 280s status = None 280s redirect_location = None 280s 280s if error and self._is_connection_error(error): 280s # Connect retry? 280s if connect is False: 280s raise reraise(type(error), error, _stacktrace) 280s elif connect is not None: 280s connect -= 1 280s 280s elif error and self._is_read_error(error): 280s # Read retry? 280s if read is False or method is None or not self._is_method_retryable(method): 280s raise reraise(type(error), error, _stacktrace) 280s elif read is not None: 280s read -= 1 280s 280s elif error: 280s # Other retry? 280s if other is not None: 280s other -= 1 280s 280s elif response and response.get_redirect_location(): 280s # Redirect retry? 280s if redirect is not None: 280s redirect -= 1 280s cause = "too many redirects" 280s response_redirect_location = response.get_redirect_location() 280s if response_redirect_location: 280s redirect_location = response_redirect_location 280s status = response.status 280s 280s else: 280s # Incrementing because of a server error like a 500 in 280s # status_forcelist and the given method is in the allowed_methods 280s cause = ResponseError.GENERIC_ERROR 280s if response and response.status: 280s if status_count is not None: 280s status_count -= 1 280s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 280s status = response.status 280s 280s history = self.history + ( 280s RequestHistory(method, url, error, status, redirect_location), 280s ) 280s 280s new_retry = self.new( 280s total=total, 280s connect=connect, 280s read=read, 280s redirect=redirect, 280s status=status_count, 280s other=other, 280s history=history, 280s ) 280s 280s if new_retry.is_exhausted(): 280s reason = error or 
ResponseError(cause) 280s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 280s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 280s 280s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 280s 280s During handling of the above exception, another exception occurred: 280s 280s cls = 280s 280s @classmethod 280s def wait_until_alive(cls): 280s """Wait for the server to be alive""" 280s url = cls.base_url() + 'api/contents' 280s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 280s try: 280s > cls.fetch_url(url) 280s 280s notebook/tests/launchnotebook.py:53: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s notebook/tests/launchnotebook.py:82: in fetch_url 280s return requests.get(url) 280s /usr/lib/python3/dist-packages/requests/api.py:73: in get 280s return request("get", url, params=params, **kwargs) 280s /usr/lib/python3/dist-packages/requests/api.py:59: in request 280s return session.request(method=method, url=url, **kwargs) 280s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 280s resp = self.send(prep, **send_kwargs) 280s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 280s r = adapter.send(request, **kwargs) 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s self = 280s request = , stream = False 280s timeout = Timeout(connect=None, read=None, total=None), verify = True 280s cert = None, proxies = OrderedDict() 280s 280s def send( 280s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 280s ): 280s """Sends PreparedRequest object. Returns Response object. 280s 280s :param request: The :class:`PreparedRequest ` being sent. 
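The ``Retry.increment()`` accounting visible in the traceback above (each connection error decrements ``total``; once a counter drops below zero the object is exhausted and ``MaxRetryError`` is raised) can be reproduced with a minimal sketch. This is a simplified illustration of the logic shown in the log, not urllib3's ``Retry`` class:

```python
# Minimal sketch of the retry accounting in urllib3's Retry.increment():
# a connection failure decrements `total` (and `connect`, if tracked);
# exhaustion means some tracked counter has gone negative.
class MiniRetry:
    def __init__(self, total=3, connect=None):
        self.total = total
        self.connect = connect

    def increment(self, connection_error=True):
        # return a *new* object, as the real Retry does
        total = None if self.total is None else self.total - 1
        connect = self.connect
        if connection_error and connect is not None:
            connect -= 1
        return MiniRetry(total=total, connect=connect)

    def is_exhausted(self):
        counters = [c for c in (self.total, self.connect) if c is not None]
        return bool(counters) and min(counters) < 0


# Matches the Retry(total=0, ...) seen in the log: the very first
# "Connection refused" exhausts the budget.
r = MiniRetry(total=0).increment()
print(r.is_exhausted())  # True -> urllib3 would raise MaxRetryError here
```

This is why the test log shows ``MaxRetryError`` immediately after a single ``NewConnectionError``: requests' adapter passes ``Retry(total=0)``, so there is no budget for a second attempt.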
280s :param stream: (optional) Whether to stream the request content. 280s :param timeout: (optional) How long to wait for the server to send 280s data before giving up, as a float, or a :ref:`(connect timeout, 280s read timeout) ` tuple. 280s :type timeout: float or tuple or urllib3 Timeout object 280s :param verify: (optional) Either a boolean, in which case it controls whether 280s we verify the server's TLS certificate, or a string, in which case it 280s must be a path to a CA bundle to use 280s :param cert: (optional) Any user-provided SSL certificate to be trusted. 280s :param proxies: (optional) The proxies dictionary to apply to the request. 280s :rtype: requests.Response 280s """ 280s 280s try: 280s conn = self.get_connection(request.url, proxies) 280s except LocationValueError as e: 280s raise InvalidURL(e, request=request) 280s 280s self.cert_verify(conn, request.url, verify, cert) 280s url = self.request_url(request, proxies) 280s self.add_headers( 280s request, 280s stream=stream, 280s timeout=timeout, 280s verify=verify, 280s cert=cert, 280s proxies=proxies, 280s ) 280s 280s chunked = not (request.body is None or "Content-Length" in request.headers) 280s 280s if isinstance(timeout, tuple): 280s try: 280s connect, read = timeout 280s timeout = TimeoutSauce(connect=connect, read=read) 280s except ValueError: 280s raise ValueError( 280s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 280s f"or a single float to set both timeouts to the same value." 
280s ) 280s elif isinstance(timeout, TimeoutSauce): 280s pass 280s else: 280s timeout = TimeoutSauce(connect=timeout, read=timeout) 280s 280s try: 280s resp = conn.urlopen( 280s method=request.method, 280s url=url, 280s body=request.body, 280s headers=request.headers, 280s redirect=False, 280s assert_same_host=False, 280s preload_content=False, 280s decode_content=False, 280s retries=self.max_retries, 280s timeout=timeout, 280s chunked=chunked, 280s ) 280s 280s except (ProtocolError, OSError) as err: 280s raise ConnectionError(err, request=request) 280s 280s except MaxRetryError as e: 280s if isinstance(e.reason, ConnectTimeoutError): 280s # TODO: Remove this in 3.0.0: see #2811 280s if not isinstance(e.reason, NewConnectionError): 280s raise ConnectTimeout(e, request=request) 280s 280s if isinstance(e.reason, ResponseError): 280s raise RetryError(e, request=request) 280s 280s if isinstance(e.reason, _ProxyError): 280s raise ProxyError(e, request=request) 280s 280s if isinstance(e.reason, _SSLError): 280s # This branch is for urllib3 v1.22 and later. 
280s raise SSLError(e, request=request) 280s 280s > raise ConnectionError(e, request=request) 280s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 280s 280s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 280s 280s The above exception was the direct cause of the following exception: 280s 280s cls = 280s 280s @classmethod 280s def setup_class(cls): 280s cls.tmp_dir = TemporaryDirectory() 280s def tmp(*parts): 280s path = os.path.join(cls.tmp_dir.name, *parts) 280s try: 280s os.makedirs(path) 280s except OSError as e: 280s if e.errno != errno.EEXIST: 280s raise 280s return path 280s 280s cls.home_dir = tmp('home') 280s data_dir = cls.data_dir = tmp('data') 280s config_dir = cls.config_dir = tmp('config') 280s runtime_dir = cls.runtime_dir = tmp('runtime') 280s cls.notebook_dir = tmp('notebooks') 280s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 280s cls.env_patch.start() 280s # Patch systemwide & user-wide data & config directories, to isolate 280s # the tests from oddities of the local setup. But leave Python env 280s # locations alone, so data files for e.g. nbconvert are accessible. 280s # If this isolation isn't sufficient, you may need to run the tests in 280s # a virtualenv or conda env. 
280s cls.path_patch = patch.multiple( 280s jupyter_core.paths, 280s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 280s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 280s ) 280s cls.path_patch.start() 280s 280s config = cls.config or Config() 280s config.NotebookNotary.db_file = ':memory:' 280s 280s cls.token = hexlify(os.urandom(4)).decode('ascii') 280s 280s started = Event() 280s def start_thread(): 280s try: 280s bind_args = cls.get_bind_args() 280s app = cls.notebook = NotebookApp( 280s port_retries=0, 280s open_browser=False, 280s config_dir=cls.config_dir, 280s data_dir=cls.data_dir, 280s runtime_dir=cls.runtime_dir, 280s notebook_dir=cls.notebook_dir, 280s base_url=cls.url_prefix, 280s config=config, 280s allow_root=True, 280s token=cls.token, 280s **bind_args 280s ) 280s if "asyncio" in sys.modules: 280s app._init_asyncio_patch() 280s import asyncio 280s 280s asyncio.set_event_loop(asyncio.new_event_loop()) 280s # Patch the current loop in order to match production 280s # behavior 280s import nest_asyncio 280s 280s nest_asyncio.apply() 280s # don't register signal handler during tests 280s app.init_signal = lambda : None 280s # clear log handlers and propagate to root for nose to capture it 280s # needs to be redone after initialize, which reconfigures logging 280s app.log.propagate = True 280s app.log.handlers = [] 280s app.initialize(argv=cls.get_argv()) 280s app.log.propagate = True 280s app.log.handlers = [] 280s loop = IOLoop.current() 280s loop.add_callback(started.set) 280s app.start() 280s finally: 280s # set the event, so failure to start doesn't cause a hang 280s started.set() 280s app.session_manager.close() 280s cls.notebook_thread = Thread(target=start_thread) 280s cls.notebook_thread.daemon = True 280s cls.notebook_thread.start() 280s started.wait() 280s > cls.wait_until_alive() 280s 280s notebook/tests/launchnotebook.py:198: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s cls = 280s 280s 
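The ``setup_class`` code above uses a common pattern: start the server in a daemon thread, signal readiness through a ``threading.Event``, and set the event in ``finally`` as well so that a failed startup cannot hang the waiting main thread. A stripped-down sketch of that pattern (the server body here is a placeholder, not the notebook app):

```python
# Sketch of the startup pattern from setup_class: daemon thread + Event,
# with the Event also set on failure so wait() always returns.
import threading

started = threading.Event()


def start_server():
    try:
        # ... initialize the app and run its event loop here ...
        started.set()  # signal "alive" once the loop is running
    finally:
        # also fires if initialization raised, so the main thread
        # proceeds to its own liveness check instead of blocking forever
        started.set()


t = threading.Thread(target=start_server, daemon=True)
t.start()
started.wait(timeout=30)
print(started.is_set())  # True
```

Because the event is set even on failure, the caller must still verify the server is actually reachable afterwards, which is exactly what ``wait_until_alive()`` does in the log.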
@classmethod 280s def wait_until_alive(cls): 280s """Wait for the server to be alive""" 280s url = cls.base_url() + 'api/contents' 280s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 280s try: 280s cls.fetch_url(url) 280s except ModuleNotFoundError as error: 280s # Errors that should be immediately thrown back to caller 280s raise error 280s except Exception as e: 280s if not cls.notebook_thread.is_alive(): 280s > raise RuntimeError("The notebook server failed to start") from e 280s E RuntimeError: The notebook server failed to start 280s 280s notebook/tests/launchnotebook.py:59: RuntimeError 280s ______________ ERROR at setup of APITest.test_create_untitled_txt ______________ 280s 280s self = 280s 280s def _new_conn(self) -> socket.socket: 280s """Establish a socket connection and set nodelay settings on it. 280s 280s :return: New socket connection. 280s """ 280s try: 280s > sock = connection.create_connection( 280s (self._dns_host, self.port), 280s self.timeout, 280s source_address=self.source_address, 280s socket_options=self.socket_options, 280s ) 280s 280s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 280s raise err 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s address = ('localhost', 12341), timeout = None, source_address = None 280s socket_options = [(6, 1, 1)] 280s 280s def create_connection( 280s address: tuple[str, int], 280s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 280s source_address: tuple[str, int] | None = None, 280s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 280s ) -> socket.socket: 280s """Connect to *address* and return the socket object. 280s 280s Convenience function. Connect to *address* (a 2-tuple ``(host, 280s port)``) and return the socket object. 
Passing the optional 280s *timeout* parameter will set the timeout on the socket instance 280s before attempting to connect. If no *timeout* is supplied, the 280s global default timeout setting returned by :func:`socket.getdefaulttimeout` 280s is used. If *source_address* is set it must be a tuple of (host, port) 280s for the socket to bind as a source address before making the connection. 280s An host of '' or port 0 tells the OS to use the default. 280s """ 280s 280s host, port = address 280s if host.startswith("["): 280s host = host.strip("[]") 280s err = None 280s 280s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 280s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 280s # The original create_connection function always returns all records. 280s family = allowed_gai_family() 280s 280s try: 280s host.encode("idna") 280s except UnicodeError: 280s raise LocationParseError(f"'{host}', label empty or too long") from None 280s 280s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 280s af, socktype, proto, canonname, sa = res 280s sock = None 280s try: 280s sock = socket.socket(af, socktype, proto) 280s 280s # If provided, set socket level options before connecting. 
280s _set_socket_options(sock, socket_options) 280s 280s if timeout is not _DEFAULT_TIMEOUT: 280s sock.settimeout(timeout) 280s if source_address: 280s sock.bind(source_address) 280s > sock.connect(sa) 280s E ConnectionRefusedError: [Errno 111] Connection refused 280s 280s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 280s 280s The above exception was the direct cause of the following exception: 280s 280s self = 280s method = 'GET', url = '/a%40b/api/contents', body = None 280s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 280s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 280s redirect = False, assert_same_host = False 280s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 280s release_conn = False, chunked = False, body_pos = None, preload_content = False 280s decode_content = False, response_kw = {} 280s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 280s destination_scheme = None, conn = None, release_this_conn = True 280s http_tunnel_required = False, err = None, clean_exit = False 280s 280s def urlopen( # type: ignore[override] 280s self, 280s method: str, 280s url: str, 280s body: _TYPE_BODY | None = None, 280s headers: typing.Mapping[str, str] | None = None, 280s retries: Retry | bool | int | None = None, 280s redirect: bool = True, 280s assert_same_host: bool = True, 280s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 280s pool_timeout: int | None = None, 280s release_conn: bool | None = None, 280s chunked: bool = False, 280s body_pos: _TYPE_BODY_POSITION | None = None, 280s preload_content: bool = True, 280s decode_content: bool = True, 280s **response_kw: typing.Any, 280s ) -> BaseHTTPResponse: 280s """ 280s Get a connection from the pool and perform an HTTP request. 
This is the 280s lowest level call for making a request, so you'll need to specify all 280s the raw details. 280s 280s .. note:: 280s 280s More commonly, it's appropriate to use a convenience method 280s such as :meth:`request`. 280s 280s .. note:: 280s 280s `release_conn` will only behave as expected if 280s `preload_content=False` because we want to make 280s `preload_content=False` the default behaviour someday soon without 280s breaking backwards compatibility. 280s 280s :param method: 280s HTTP request method (such as GET, POST, PUT, etc.) 280s 280s :param url: 280s The URL to perform the request on. 280s 280s :param body: 280s Data to send in the request body, either :class:`str`, :class:`bytes`, 280s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 280s 280s :param headers: 280s Dictionary of custom headers to send, such as User-Agent, 280s If-None-Match, etc. If None, pool headers are used. If provided, 280s these headers completely replace any pool-specific headers. 280s 280s :param retries: 280s Configure the number of retries to allow before raising a 280s :class:`~urllib3.exceptions.MaxRetryError` exception. 280s 280s Pass ``None`` to retry until you receive a response. Pass a 280s :class:`~urllib3.util.retry.Retry` object for fine-grained control 280s over different types of retries. 280s Pass an integer number to retry connection errors that many times, 280s but no other types of errors. Pass zero to never retry. 280s 280s If ``False``, then retries are disabled and any exception is raised 280s immediately. Also, instead of raising a MaxRetryError on redirects, 280s the redirect response will be returned. 280s 280s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 280s 280s :param redirect: 280s If True, automatically handle redirects (status codes 301, 302, 280s 303, 307, 308). Each redirect counts as a retry. Disabling retries 280s will disable redirect, too. 
ResponseError(cause) 280s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 280s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 280s 280s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 280s 280s During handling of the above exception, another exception occurred: 280s 280s cls = 280s 280s @classmethod 280s def wait_until_alive(cls): 280s """Wait for the server to be alive""" 280s url = cls.base_url() + 'api/contents' 280s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 280s try: 280s > cls.fetch_url(url) 280s 280s notebook/tests/launchnotebook.py:53: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s notebook/tests/launchnotebook.py:82: in fetch_url 280s return requests.get(url) 280s /usr/lib/python3/dist-packages/requests/api.py:73: in get 280s return request("get", url, params=params, **kwargs) 280s /usr/lib/python3/dist-packages/requests/api.py:59: in request 280s return session.request(method=method, url=url, **kwargs) 280s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 280s resp = self.send(prep, **send_kwargs) 280s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 280s r = adapter.send(request, **kwargs) 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s self = 280s request = , stream = False 280s timeout = Timeout(connect=None, read=None, total=None), verify = True 280s cert = None, proxies = OrderedDict() 280s 280s def send( 280s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 280s ): 280s """Sends PreparedRequest object. Returns Response object. 280s 280s :param request: The :class:`PreparedRequest ` being sent. 
280s :param stream: (optional) Whether to stream the request content. 280s :param timeout: (optional) How long to wait for the server to send 280s data before giving up, as a float, or a :ref:`(connect timeout, 280s read timeout) ` tuple. 280s :type timeout: float or tuple or urllib3 Timeout object 280s :param verify: (optional) Either a boolean, in which case it controls whether 280s we verify the server's TLS certificate, or a string, in which case it 280s must be a path to a CA bundle to use 280s :param cert: (optional) Any user-provided SSL certificate to be trusted. 280s :param proxies: (optional) The proxies dictionary to apply to the request. 280s :rtype: requests.Response 280s """ 280s 280s try: 280s conn = self.get_connection(request.url, proxies) 280s except LocationValueError as e: 280s raise InvalidURL(e, request=request) 280s 280s self.cert_verify(conn, request.url, verify, cert) 280s url = self.request_url(request, proxies) 280s self.add_headers( 280s request, 280s stream=stream, 280s timeout=timeout, 280s verify=verify, 280s cert=cert, 280s proxies=proxies, 280s ) 280s 280s chunked = not (request.body is None or "Content-Length" in request.headers) 280s 280s if isinstance(timeout, tuple): 280s try: 280s connect, read = timeout 280s timeout = TimeoutSauce(connect=connect, read=read) 280s except ValueError: 280s raise ValueError( 280s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 280s f"or a single float to set both timeouts to the same value." 
280s ) 280s elif isinstance(timeout, TimeoutSauce): 280s pass 280s else: 280s timeout = TimeoutSauce(connect=timeout, read=timeout) 280s 280s try: 280s resp = conn.urlopen( 280s method=request.method, 280s url=url, 280s body=request.body, 280s headers=request.headers, 280s redirect=False, 280s assert_same_host=False, 280s preload_content=False, 280s decode_content=False, 280s retries=self.max_retries, 280s timeout=timeout, 280s chunked=chunked, 280s ) 280s 280s except (ProtocolError, OSError) as err: 280s raise ConnectionError(err, request=request) 280s 280s except MaxRetryError as e: 280s if isinstance(e.reason, ConnectTimeoutError): 280s # TODO: Remove this in 3.0.0: see #2811 280s if not isinstance(e.reason, NewConnectionError): 280s raise ConnectTimeout(e, request=request) 280s 280s if isinstance(e.reason, ResponseError): 280s raise RetryError(e, request=request) 280s 280s if isinstance(e.reason, _ProxyError): 280s raise ProxyError(e, request=request) 280s 280s if isinstance(e.reason, _SSLError): 280s # This branch is for urllib3 v1.22 and later. 
280s raise SSLError(e, request=request) 280s 280s > raise ConnectionError(e, request=request) 280s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 280s 280s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 280s 280s The above exception was the direct cause of the following exception: 280s 280s cls = 280s 280s @classmethod 280s def setup_class(cls): 280s cls.tmp_dir = TemporaryDirectory() 280s def tmp(*parts): 280s path = os.path.join(cls.tmp_dir.name, *parts) 280s try: 280s os.makedirs(path) 280s except OSError as e: 280s if e.errno != errno.EEXIST: 280s raise 280s return path 280s 280s cls.home_dir = tmp('home') 280s data_dir = cls.data_dir = tmp('data') 280s config_dir = cls.config_dir = tmp('config') 280s runtime_dir = cls.runtime_dir = tmp('runtime') 280s cls.notebook_dir = tmp('notebooks') 280s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 280s cls.env_patch.start() 280s # Patch systemwide & user-wide data & config directories, to isolate 280s # the tests from oddities of the local setup. But leave Python env 280s # locations alone, so data files for e.g. nbconvert are accessible. 280s # If this isolation isn't sufficient, you may need to run the tests in 280s # a virtualenv or conda env. 
280s cls.path_patch = patch.multiple( 280s jupyter_core.paths, 280s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 280s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 280s ) 280s cls.path_patch.start() 280s 280s config = cls.config or Config() 280s config.NotebookNotary.db_file = ':memory:' 280s 280s cls.token = hexlify(os.urandom(4)).decode('ascii') 280s 280s started = Event() 280s def start_thread(): 280s try: 280s bind_args = cls.get_bind_args() 280s app = cls.notebook = NotebookApp( 280s port_retries=0, 280s open_browser=False, 280s config_dir=cls.config_dir, 280s data_dir=cls.data_dir, 280s runtime_dir=cls.runtime_dir, 280s notebook_dir=cls.notebook_dir, 280s base_url=cls.url_prefix, 280s config=config, 280s allow_root=True, 280s token=cls.token, 280s **bind_args 280s ) 280s if "asyncio" in sys.modules: 280s app._init_asyncio_patch() 280s import asyncio 280s 280s asyncio.set_event_loop(asyncio.new_event_loop()) 280s # Patch the current loop in order to match production 280s # behavior 280s import nest_asyncio 280s 280s nest_asyncio.apply() 280s # don't register signal handler during tests 280s app.init_signal = lambda : None 280s # clear log handlers and propagate to root for nose to capture it 280s # needs to be redone after initialize, which reconfigures logging 280s app.log.propagate = True 280s app.log.handlers = [] 280s app.initialize(argv=cls.get_argv()) 280s app.log.propagate = True 280s app.log.handlers = [] 280s loop = IOLoop.current() 280s loop.add_callback(started.set) 280s app.start() 280s finally: 280s # set the event, so failure to start doesn't cause a hang 280s started.set() 280s app.session_manager.close() 280s cls.notebook_thread = Thread(target=start_thread) 280s cls.notebook_thread.daemon = True 280s cls.notebook_thread.start() 280s started.wait() 280s > cls.wait_until_alive() 280s 280s notebook/tests/launchnotebook.py:198: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s cls = 280s 280s 
@classmethod 280s def wait_until_alive(cls): 280s """Wait for the server to be alive""" 280s url = cls.base_url() + 'api/contents' 280s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 280s try: 280s cls.fetch_url(url) 280s except ModuleNotFoundError as error: 280s # Errors that should be immediately thrown back to caller 280s raise error 280s except Exception as e: 280s if not cls.notebook_thread.is_alive(): 280s > raise RuntimeError("The notebook server failed to start") from e 280s E RuntimeError: The notebook server failed to start 280s 280s notebook/tests/launchnotebook.py:59: RuntimeError 280s _______________ ERROR at setup of APITest.test_delete_hidden_dir _______________ 280s 280s self = 280s 280s def _new_conn(self) -> socket.socket: 280s """Establish a socket connection and set nodelay settings on it. 280s 280s :return: New socket connection. 280s """ 280s try: 280s > sock = connection.create_connection( 280s (self._dns_host, self.port), 280s self.timeout, 280s source_address=self.source_address, 280s socket_options=self.socket_options, 280s ) 280s 280s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 280s raise err 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s address = ('localhost', 12341), timeout = None, source_address = None 280s socket_options = [(6, 1, 1)] 280s 280s def create_connection( 280s address: tuple[str, int], 280s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 280s source_address: tuple[str, int] | None = None, 280s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 280s ) -> socket.socket: 280s """Connect to *address* and return the socket object. 280s 280s Convenience function. Connect to *address* (a 2-tuple ``(host, 280s port)``) and return the socket object. 
Passing the optional 280s *timeout* parameter will set the timeout on the socket instance 280s before attempting to connect. If no *timeout* is supplied, the 280s global default timeout setting returned by :func:`socket.getdefaulttimeout` 280s is used. If *source_address* is set it must be a tuple of (host, port) 280s for the socket to bind as a source address before making the connection. 280s An host of '' or port 0 tells the OS to use the default. 280s """ 280s 280s host, port = address 280s if host.startswith("["): 280s host = host.strip("[]") 280s err = None 280s 280s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 280s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 280s # The original create_connection function always returns all records. 280s family = allowed_gai_family() 280s 280s try: 280s host.encode("idna") 280s except UnicodeError: 280s raise LocationParseError(f"'{host}', label empty or too long") from None 280s 280s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 280s af, socktype, proto, canonname, sa = res 280s sock = None 280s try: 280s sock = socket.socket(af, socktype, proto) 280s 280s # If provided, set socket level options before connecting. 
280s _set_socket_options(sock, socket_options) 280s 280s if timeout is not _DEFAULT_TIMEOUT: 280s sock.settimeout(timeout) 280s if source_address: 280s sock.bind(source_address) 280s > sock.connect(sa) 280s E ConnectionRefusedError: [Errno 111] Connection refused 280s 280s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 280s 280s The above exception was the direct cause of the following exception: 280s 280s self = 280s method = 'GET', url = '/a%40b/api/contents', body = None 280s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 280s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 280s redirect = False, assert_same_host = False 280s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 280s release_conn = False, chunked = False, body_pos = None, preload_content = False 280s decode_content = False, response_kw = {} 280s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 280s destination_scheme = None, conn = None, release_this_conn = True 280s http_tunnel_required = False, err = None, clean_exit = False 280s 280s def urlopen( # type: ignore[override] 280s self, 280s method: str, 280s url: str, 280s body: _TYPE_BODY | None = None, 280s headers: typing.Mapping[str, str] | None = None, 280s retries: Retry | bool | int | None = None, 280s redirect: bool = True, 280s assert_same_host: bool = True, 280s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 280s pool_timeout: int | None = None, 280s release_conn: bool | None = None, 280s chunked: bool = False, 280s body_pos: _TYPE_BODY_POSITION | None = None, 280s preload_content: bool = True, 280s decode_content: bool = True, 280s **response_kw: typing.Any, 280s ) -> BaseHTTPResponse: 280s """ 280s Get a connection from the pool and perform an HTTP request. 
This is the 280s lowest level call for making a request, so you'll need to specify all 280s the raw details. 280s 280s .. note:: 280s 280s More commonly, it's appropriate to use a convenience method 280s such as :meth:`request`. 280s 280s .. note:: 280s 280s `release_conn` will only behave as expected if 280s `preload_content=False` because we want to make 280s `preload_content=False` the default behaviour someday soon without 280s breaking backwards compatibility. 280s 280s :param method: 280s HTTP request method (such as GET, POST, PUT, etc.) 280s 280s :param url: 280s The URL to perform the request on. 280s 280s :param body: 280s Data to send in the request body, either :class:`str`, :class:`bytes`, 280s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 280s 280s :param headers: 280s Dictionary of custom headers to send, such as User-Agent, 280s If-None-Match, etc. If None, pool headers are used. If provided, 280s these headers completely replace any pool-specific headers. 280s 280s :param retries: 280s Configure the number of retries to allow before raising a 280s :class:`~urllib3.exceptions.MaxRetryError` exception. 280s 280s Pass ``None`` to retry until you receive a response. Pass a 280s :class:`~urllib3.util.retry.Retry` object for fine-grained control 280s over different types of retries. 280s Pass an integer number to retry connection errors that many times, 280s but no other types of errors. Pass zero to never retry. 280s 280s If ``False``, then retries are disabled and any exception is raised 280s immediately. Also, instead of raising a MaxRetryError on redirects, 280s the redirect response will be returned. 280s 280s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 280s 280s :param redirect: 280s If True, automatically handle redirects (status codes 301, 302, 280s 303, 307, 308). Each redirect counts as a retry. Disabling retries 280s will disable redirect, too. 
280s 280s :param assert_same_host: 280s If ``True``, will make sure that the host of the pool requests is 280s consistent else will raise HostChangedError. When ``False``, you can 280s use the pool on an HTTP proxy and request foreign hosts. 280s 280s :param timeout: 280s If specified, overrides the default timeout for this one 280s request. It may be a float (in seconds) or an instance of 280s :class:`urllib3.util.Timeout`. 280s 280s :param pool_timeout: 280s If set and the pool is set to block=True, then this method will 280s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 280s connection is available within the time period. 280s 280s :param bool preload_content: 280s If True, the response's body will be preloaded into memory. 280s 280s :param bool decode_content: 280s If True, will attempt to decode the body based on the 280s 'content-encoding' header. 280s 280s :param release_conn: 280s If False, then the urlopen call will not release the connection 280s back into the pool once a response is received (but will release if 280s you read the entire contents of the response such as when 280s `preload_content=True`). This is useful if you're not preloading 280s the response's content immediately. You will need to call 280s ``r.release_conn()`` on the response ``r`` to return the connection 280s back into the pool. If None, it takes the value of ``preload_content`` 280s which defaults to ``True``. 280s 280s :param bool chunked: 280s If True, urllib3 will send the body using chunked transfer 280s encoding. Otherwise, urllib3 will send the body using the standard 280s content-length form. Defaults to False. 280s 280s :param int body_pos: 280s Position to seek to in file-like body in the event of a retry or 280s redirect. Typically this won't need to be set because urllib3 will 280s auto-populate the value when needed. 
280s """ 280s parsed_url = parse_url(url) 280s destination_scheme = parsed_url.scheme 280s 280s if headers is None: 280s headers = self.headers 280s 280s if not isinstance(retries, Retry): 280s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 280s 280s if release_conn is None: 280s release_conn = preload_content 280s 280s # Check host 280s if assert_same_host and not self.is_same_host(url): 280s raise HostChangedError(self, url, retries) 280s 280s # Ensure that the URL we're connecting to is properly encoded 280s if url.startswith("/"): 280s url = to_str(_encode_target(url)) 280s else: 280s url = to_str(parsed_url.url) 280s 280s conn = None 280s 280s # Track whether `conn` needs to be released before 280s # returning/raising/recursing. Update this variable if necessary, and 280s # leave `release_conn` constant throughout the function. That way, if 280s # the function recurses, the original value of `release_conn` will be 280s # passed down into the recursive call, and its value will be respected. 280s # 280s # See issue #651 [1] for details. 280s # 280s # [1] 280s release_this_conn = release_conn 280s 280s http_tunnel_required = connection_requires_http_tunnel( 280s self.proxy, self.proxy_config, destination_scheme 280s ) 280s 280s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 280s # have to copy the headers dict so we can safely change it without those 280s # changes being reflected in anyone else's copy. 280s if not http_tunnel_required: 280s headers = headers.copy() # type: ignore[attr-defined] 280s headers.update(self.proxy_headers) # type: ignore[union-attr] 280s 280s # Must keep the exception bound to a separate variable or else Python 3 280s # complains about UnboundLocalError. 280s err = None 280s 280s # Keep track of whether we cleanly exited the except block. This 280s # ensures we do proper cleanup in finally. 280s clean_exit = False 280s 280s # Rewind body position, if needed. 
Record current position 280s # for future rewinds in the event of a redirect/retry. 280s body_pos = set_file_position(body, body_pos) 280s 280s try: 280s # Request a connection from the queue. 280s timeout_obj = self._get_timeout(timeout) 280s conn = self._get_conn(timeout=pool_timeout) 280s 280s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 280s 280s # Is this a closed/new connection that requires CONNECT tunnelling? 280s if self.proxy is not None and http_tunnel_required and conn.is_closed: 280s try: 280s self._prepare_proxy(conn) 280s except (BaseSSLError, OSError, SocketTimeout) as e: 280s self._raise_timeout( 280s err=e, url=self.proxy.url, timeout_value=conn.timeout 280s ) 280s raise 280s 280s # If we're going to release the connection in ``finally:``, then 280s # the response doesn't need to know about the connection. Otherwise 280s # it will also try to release it and we'll have a double-release 280s # mess. 280s response_conn = conn if not release_conn else None 280s 280s # Make the request on the HTTPConnection object 280s > response = self._make_request( 280s conn, 280s method, 280s url, 280s timeout=timeout_obj, 280s body=body, 280s headers=headers, 280s chunked=chunked, 280s retries=retries, 280s response_conn=response_conn, 280s preload_content=preload_content, 280s decode_content=decode_content, 280s **response_kw, 280s ) 280s 280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 280s conn.request( 280s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 280s self.endheaders() 280s /usr/lib/python3.12/http/client.py:1331: in endheaders 280s self._send_output(message_body, encode_chunked=encode_chunked) 280s /usr/lib/python3.12/http/client.py:1091: in _send_output 280s self.send(msg) 280s /usr/lib/python3.12/http/client.py:1035: in 
280s raise SSLError(e, request=request) 280s 280s > raise ConnectionError(e, request=request) 280s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 280s 280s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 280s 280s The above exception was the direct cause of the following exception: 280s 280s cls = 280s 280s @classmethod 280s def setup_class(cls): 280s cls.tmp_dir = TemporaryDirectory() 280s def tmp(*parts): 280s path = os.path.join(cls.tmp_dir.name, *parts) 280s try: 280s os.makedirs(path) 280s except OSError as e: 280s if e.errno != errno.EEXIST: 280s raise 280s return path 280s 280s cls.home_dir = tmp('home') 280s data_dir = cls.data_dir = tmp('data') 280s config_dir = cls.config_dir = tmp('config') 280s runtime_dir = cls.runtime_dir = tmp('runtime') 280s cls.notebook_dir = tmp('notebooks') 280s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 280s cls.env_patch.start() 280s # Patch systemwide & user-wide data & config directories, to isolate 280s # the tests from oddities of the local setup. But leave Python env 280s # locations alone, so data files for e.g. nbconvert are accessible. 280s # If this isolation isn't sufficient, you may need to run the tests in 280s # a virtualenv or conda env. 
280s cls.path_patch = patch.multiple( 280s jupyter_core.paths, 280s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 280s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 280s ) 280s cls.path_patch.start() 280s 280s config = cls.config or Config() 280s config.NotebookNotary.db_file = ':memory:' 280s 280s cls.token = hexlify(os.urandom(4)).decode('ascii') 280s 280s started = Event() 280s def start_thread(): 280s try: 280s bind_args = cls.get_bind_args() 280s app = cls.notebook = NotebookApp( 280s port_retries=0, 280s open_browser=False, 280s config_dir=cls.config_dir, 280s data_dir=cls.data_dir, 280s runtime_dir=cls.runtime_dir, 280s notebook_dir=cls.notebook_dir, 280s base_url=cls.url_prefix, 280s config=config, 280s allow_root=True, 280s token=cls.token, 280s **bind_args 280s ) 280s if "asyncio" in sys.modules: 280s app._init_asyncio_patch() 280s import asyncio 280s 280s asyncio.set_event_loop(asyncio.new_event_loop()) 280s # Patch the current loop in order to match production 280s # behavior 280s import nest_asyncio 280s 280s nest_asyncio.apply() 280s # don't register signal handler during tests 280s app.init_signal = lambda : None 280s # clear log handlers and propagate to root for nose to capture it 280s # needs to be redone after initialize, which reconfigures logging 280s app.log.propagate = True 280s app.log.handlers = [] 280s app.initialize(argv=cls.get_argv()) 280s app.log.propagate = True 280s app.log.handlers = [] 280s loop = IOLoop.current() 280s loop.add_callback(started.set) 280s app.start() 280s finally: 280s # set the event, so failure to start doesn't cause a hang 280s started.set() 280s app.session_manager.close() 280s cls.notebook_thread = Thread(target=start_thread) 280s cls.notebook_thread.daemon = True 280s cls.notebook_thread.start() 280s started.wait() 280s > cls.wait_until_alive() 280s 280s notebook/tests/launchnotebook.py:198: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s cls = 280s 280s 
@classmethod 280s def wait_until_alive(cls): 280s """Wait for the server to be alive""" 280s url = cls.base_url() + 'api/contents' 280s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 280s try: 280s cls.fetch_url(url) 280s except ModuleNotFoundError as error: 280s # Errors that should be immediately thrown back to caller 280s raise error 280s except Exception as e: 280s if not cls.notebook_thread.is_alive(): 280s > raise RuntimeError("The notebook server failed to start") from e 280s E RuntimeError: The notebook server failed to start 280s 280s notebook/tests/launchnotebook.py:59: RuntimeError 280s ______________ ERROR at setup of APITest.test_delete_hidden_file _______________ 280s 280s self = 280s 280s def _new_conn(self) -> socket.socket: 280s """Establish a socket connection and set nodelay settings on it. 280s 280s :return: New socket connection. 280s """ 280s try: 280s > sock = connection.create_connection( 280s (self._dns_host, self.port), 280s self.timeout, 280s source_address=self.source_address, 280s socket_options=self.socket_options, 280s ) 280s 280s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 280s raise err 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s address = ('localhost', 12341), timeout = None, source_address = None 280s socket_options = [(6, 1, 1)] 280s 280s def create_connection( 280s address: tuple[str, int], 280s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 280s source_address: tuple[str, int] | None = None, 280s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 280s ) -> socket.socket: 280s """Connect to *address* and return the socket object. 280s 280s Convenience function. Connect to *address* (a 2-tuple ``(host, 280s port)``) and return the socket object. 
Passing the optional 280s *timeout* parameter will set the timeout on the socket instance 280s before attempting to connect. If no *timeout* is supplied, the 280s global default timeout setting returned by :func:`socket.getdefaulttimeout` 280s is used. If *source_address* is set it must be a tuple of (host, port) 280s for the socket to bind as a source address before making the connection. 280s An host of '' or port 0 tells the OS to use the default. 280s """ 280s 280s host, port = address 280s if host.startswith("["): 280s host = host.strip("[]") 280s err = None 280s 280s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 280s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 280s # The original create_connection function always returns all records. 280s family = allowed_gai_family() 280s 280s try: 280s host.encode("idna") 280s except UnicodeError: 280s raise LocationParseError(f"'{host}', label empty or too long") from None 280s 280s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 280s af, socktype, proto, canonname, sa = res 280s sock = None 280s try: 280s sock = socket.socket(af, socktype, proto) 280s 280s # If provided, set socket level options before connecting. 
280s _set_socket_options(sock, socket_options) 280s 280s if timeout is not _DEFAULT_TIMEOUT: 280s sock.settimeout(timeout) 280s if source_address: 280s sock.bind(source_address) 280s > sock.connect(sa) 280s E ConnectionRefusedError: [Errno 111] Connection refused 280s 280s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 280s 280s The above exception was the direct cause of the following exception: 280s 280s self = 280s method = 'GET', url = '/a%40b/api/contents', body = None 280s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 280s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 280s redirect = False, assert_same_host = False 280s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 280s release_conn = False, chunked = False, body_pos = None, preload_content = False 280s decode_content = False, response_kw = {} 280s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 280s destination_scheme = None, conn = None, release_this_conn = True 280s http_tunnel_required = False, err = None, clean_exit = False 280s 280s def urlopen( # type: ignore[override] 280s self, 280s method: str, 280s url: str, 280s body: _TYPE_BODY | None = None, 280s headers: typing.Mapping[str, str] | None = None, 280s retries: Retry | bool | int | None = None, 280s redirect: bool = True, 280s assert_same_host: bool = True, 280s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 280s pool_timeout: int | None = None, 280s release_conn: bool | None = None, 280s chunked: bool = False, 280s body_pos: _TYPE_BODY_POSITION | None = None, 280s preload_content: bool = True, 280s decode_content: bool = True, 280s **response_kw: typing.Any, 280s ) -> BaseHTTPResponse: 280s """ 280s Get a connection from the pool and perform an HTTP request. 
This is the 280s lowest level call for making a request, so you'll need to specify all 280s the raw details. 280s 280s .. note:: 280s 280s More commonly, it's appropriate to use a convenience method 280s such as :meth:`request`. 280s 280s .. note:: 280s 280s `release_conn` will only behave as expected if 280s `preload_content=False` because we want to make 280s `preload_content=False` the default behaviour someday soon without 280s breaking backwards compatibility. 280s 280s :param method: 280s HTTP request method (such as GET, POST, PUT, etc.) 280s 280s :param url: 280s The URL to perform the request on. 280s 280s :param body: 280s Data to send in the request body, either :class:`str`, :class:`bytes`, 280s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 280s 280s :param headers: 280s Dictionary of custom headers to send, such as User-Agent, 280s If-None-Match, etc. If None, pool headers are used. If provided, 280s these headers completely replace any pool-specific headers. 280s 280s :param retries: 280s Configure the number of retries to allow before raising a 280s :class:`~urllib3.exceptions.MaxRetryError` exception. 280s 280s Pass ``None`` to retry until you receive a response. Pass a 280s :class:`~urllib3.util.retry.Retry` object for fine-grained control 280s over different types of retries. 280s Pass an integer number to retry connection errors that many times, 280s but no other types of errors. Pass zero to never retry. 280s 280s If ``False``, then retries are disabled and any exception is raised 280s immediately. Also, instead of raising a MaxRetryError on redirects, 280s the redirect response will be returned. 280s 280s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 280s 280s :param redirect: 280s If True, automatically handle redirects (status codes 301, 302, 280s 303, 307, 308). Each redirect counts as a retry. Disabling retries 280s will disable redirect, too. 
280s 280s :param assert_same_host: 280s If ``True``, will make sure that the host of the pool requests is 280s consistent else will raise HostChangedError. When ``False``, you can 280s use the pool on an HTTP proxy and request foreign hosts. 280s 280s :param timeout: 280s If specified, overrides the default timeout for this one 280s request. It may be a float (in seconds) or an instance of 280s :class:`urllib3.util.Timeout`. 280s 280s :param pool_timeout: 280s If set and the pool is set to block=True, then this method will 280s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 280s connection is available within the time period. 280s 280s :param bool preload_content: 280s If True, the response's body will be preloaded into memory. 280s 280s :param bool decode_content: 280s If True, will attempt to decode the body based on the 280s 'content-encoding' header. 280s 280s :param release_conn: 280s If False, then the urlopen call will not release the connection 280s back into the pool once a response is received (but will release if 280s you read the entire contents of the response such as when 280s `preload_content=True`). This is useful if you're not preloading 280s the response's content immediately. You will need to call 280s ``r.release_conn()`` on the response ``r`` to return the connection 280s back into the pool. If None, it takes the value of ``preload_content`` 280s which defaults to ``True``. 280s 280s :param bool chunked: 280s If True, urllib3 will send the body using chunked transfer 280s encoding. Otherwise, urllib3 will send the body using the standard 280s content-length form. Defaults to False. 280s 280s :param int body_pos: 280s Position to seek to in file-like body in the event of a retry or 280s redirect. Typically this won't need to be set because urllib3 will 280s auto-populate the value when needed. 
280s """ 280s parsed_url = parse_url(url) 280s destination_scheme = parsed_url.scheme 280s 280s if headers is None: 280s headers = self.headers 280s 280s if not isinstance(retries, Retry): 280s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 280s 280s if release_conn is None: 280s release_conn = preload_content 280s 280s # Check host 280s if assert_same_host and not self.is_same_host(url): 280s raise HostChangedError(self, url, retries) 280s 280s # Ensure that the URL we're connecting to is properly encoded 280s if url.startswith("/"): 280s url = to_str(_encode_target(url)) 280s else: 280s url = to_str(parsed_url.url) 280s 280s conn = None 280s 280s # Track whether `conn` needs to be released before 280s # returning/raising/recursing. Update this variable if necessary, and 280s # leave `release_conn` constant throughout the function. That way, if 280s # the function recurses, the original value of `release_conn` will be 280s # passed down into the recursive call, and its value will be respected. 280s # 280s # See issue #651 [1] for details. 280s # 280s # [1] 280s release_this_conn = release_conn 280s 280s http_tunnel_required = connection_requires_http_tunnel( 280s self.proxy, self.proxy_config, destination_scheme 280s ) 280s 280s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 280s # have to copy the headers dict so we can safely change it without those 280s # changes being reflected in anyone else's copy. 280s if not http_tunnel_required: 280s headers = headers.copy() # type: ignore[attr-defined] 280s headers.update(self.proxy_headers) # type: ignore[union-attr] 280s 280s # Must keep the exception bound to a separate variable or else Python 3 280s # complains about UnboundLocalError. 280s err = None 280s 280s # Keep track of whether we cleanly exited the except block. This 280s # ensures we do proper cleanup in finally. 280s clean_exit = False 280s 280s # Rewind body position, if needed. 
Record current position 280s # for future rewinds in the event of a redirect/retry. 280s body_pos = set_file_position(body, body_pos) 280s 280s try: 280s # Request a connection from the queue. 280s timeout_obj = self._get_timeout(timeout) 280s conn = self._get_conn(timeout=pool_timeout) 280s 280s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 280s 280s # Is this a closed/new connection that requires CONNECT tunnelling? 280s if self.proxy is not None and http_tunnel_required and conn.is_closed: 280s try: 280s self._prepare_proxy(conn) 280s except (BaseSSLError, OSError, SocketTimeout) as e: 280s self._raise_timeout( 280s err=e, url=self.proxy.url, timeout_value=conn.timeout 280s ) 280s raise 280s 280s # If we're going to release the connection in ``finally:``, then 280s # the response doesn't need to know about the connection. Otherwise 280s # it will also try to release it and we'll have a double-release 280s # mess. 280s response_conn = conn if not release_conn else None 280s 280s # Make the request on the HTTPConnection object 280s > response = self._make_request( 280s conn, 280s method, 280s url, 280s timeout=timeout_obj, 280s body=body, 280s headers=headers, 280s chunked=chunked, 280s retries=retries, 280s response_conn=response_conn, 280s preload_content=preload_content, 280s decode_content=decode_content, 280s **response_kw, 280s ) 280s 280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 280s conn.request( 280s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 280s self.endheaders() 280s /usr/lib/python3.12/http/client.py:1331: in endheaders 280s self._send_output(message_body, encode_chunked=encode_chunked) 280s /usr/lib/python3.12/http/client.py:1091: in _send_output 280s self.send(msg) 280s /usr/lib/python3.12/http/client.py:1035: in 
send 280s self.connect() 280s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 280s self.sock = self._new_conn() 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s self = 280s 280s def _new_conn(self) -> socket.socket: 280s """Establish a socket connection and set nodelay settings on it. 280s 280s :return: New socket connection. 280s """ 280s try: 280s sock = connection.create_connection( 280s (self._dns_host, self.port), 280s self.timeout, 280s source_address=self.source_address, 280s socket_options=self.socket_options, 280s ) 280s except socket.gaierror as e: 280s raise NameResolutionError(self.host, self, e) from e 280s except SocketTimeout as e: 280s raise ConnectTimeoutError( 280s self, 280s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 280s ) from e 280s 280s except OSError as e: 280s > raise NewConnectionError( 280s self, f"Failed to establish a new connection: {e}" 280s ) from e 280s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 280s 280s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 280s 280s The above exception was the direct cause of the following exception: 280s 280s self = 280s request = , stream = False 280s timeout = Timeout(connect=None, read=None, total=None), verify = True 280s cert = None, proxies = OrderedDict() 280s 280s def send( 280s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 280s ): 280s """Sends PreparedRequest object. Returns Response object. 280s 280s :param request: The :class:`PreparedRequest ` being sent. 280s :param stream: (optional) Whether to stream the request content. 280s :param timeout: (optional) How long to wait for the server to send 280s data before giving up, as a float, or a :ref:`(connect timeout, 280s read timeout) ` tuple. 
280s :type timeout: float or tuple or urllib3 Timeout object 280s :param verify: (optional) Either a boolean, in which case it controls whether 280s we verify the server's TLS certificate, or a string, in which case it 280s must be a path to a CA bundle to use 280s :param cert: (optional) Any user-provided SSL certificate to be trusted. 280s :param proxies: (optional) The proxies dictionary to apply to the request. 280s :rtype: requests.Response 280s """ 280s 280s try: 280s conn = self.get_connection(request.url, proxies) 280s except LocationValueError as e: 280s raise InvalidURL(e, request=request) 280s 280s self.cert_verify(conn, request.url, verify, cert) 280s url = self.request_url(request, proxies) 280s self.add_headers( 280s request, 280s stream=stream, 280s timeout=timeout, 280s verify=verify, 280s cert=cert, 280s proxies=proxies, 280s ) 280s 280s chunked = not (request.body is None or "Content-Length" in request.headers) 280s 280s if isinstance(timeout, tuple): 280s try: 280s connect, read = timeout 280s timeout = TimeoutSauce(connect=connect, read=read) 280s except ValueError: 280s raise ValueError( 280s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 280s f"or a single float to set both timeouts to the same value." 
280s ) 280s elif isinstance(timeout, TimeoutSauce): 280s pass 280s else: 280s timeout = TimeoutSauce(connect=timeout, read=timeout) 280s 280s try: 280s > resp = conn.urlopen( 280s method=request.method, 280s url=url, 280s body=request.body, 280s headers=request.headers, 280s redirect=False, 280s assert_same_host=False, 280s preload_content=False, 280s decode_content=False, 280s retries=self.max_retries, 280s timeout=timeout, 280s chunked=chunked, 280s ) 280s 280s /usr/lib/python3/dist-packages/requests/adapters.py:486: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 280s retries = retries.increment( 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 280s method = 'GET', url = '/a%40b/api/contents', response = None 280s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 280s _pool = 280s _stacktrace = 280s 280s def increment( 280s self, 280s method: str | None = None, 280s url: str | None = None, 280s response: BaseHTTPResponse | None = None, 280s error: Exception | None = None, 280s _pool: ConnectionPool | None = None, 280s _stacktrace: TracebackType | None = None, 280s ) -> Retry: 280s """Return a new Retry object with incremented retry counters. 280s 280s :param response: A response object, or None, if the server did not 280s return a response. 280s :type response: :class:`~urllib3.response.BaseHTTPResponse` 280s :param Exception error: An error encountered during the request, or 280s None if the response was received successfully. 280s 280s :return: A new ``Retry`` object. 280s """ 280s if self.total is False and error: 280s # Disabled, indicate to re-raise the error. 
280s raise reraise(type(error), error, _stacktrace) 280s 280s total = self.total 280s if total is not None: 280s total -= 1 280s 280s connect = self.connect 280s read = self.read 280s redirect = self.redirect 280s status_count = self.status 280s other = self.other 280s cause = "unknown" 280s status = None 280s redirect_location = None 280s 280s if error and self._is_connection_error(error): 280s # Connect retry? 280s if connect is False: 280s raise reraise(type(error), error, _stacktrace) 280s elif connect is not None: 280s connect -= 1 280s 280s elif error and self._is_read_error(error): 280s # Read retry? 280s if read is False or method is None or not self._is_method_retryable(method): 280s raise reraise(type(error), error, _stacktrace) 280s elif read is not None: 280s read -= 1 280s 280s elif error: 280s # Other retry? 280s if other is not None: 280s other -= 1 280s 280s elif response and response.get_redirect_location(): 280s # Redirect retry? 280s if redirect is not None: 280s redirect -= 1 280s cause = "too many redirects" 280s response_redirect_location = response.get_redirect_location() 280s if response_redirect_location: 280s redirect_location = response_redirect_location 280s status = response.status 280s 280s else: 280s # Incrementing because of a server error like a 500 in 280s # status_forcelist and the given method is in the allowed_methods 280s cause = ResponseError.GENERIC_ERROR 280s if response and response.status: 280s if status_count is not None: 280s status_count -= 1 280s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 280s status = response.status 280s 280s history = self.history + ( 280s RequestHistory(method, url, error, status, redirect_location), 280s ) 280s 280s new_retry = self.new( 280s total=total, 280s connect=connect, 280s read=read, 280s redirect=redirect, 280s status=status_count, 280s other=other, 280s history=history, 280s ) 280s 280s if new_retry.is_exhausted(): 280s reason = error or 
ResponseError(cause) 280s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 280s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 280s 280s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 280s 280s During handling of the above exception, another exception occurred: 280s 280s cls = 280s 280s @classmethod 280s def wait_until_alive(cls): 280s """Wait for the server to be alive""" 280s url = cls.base_url() + 'api/contents' 280s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 280s try: 280s > cls.fetch_url(url) 280s 280s notebook/tests/launchnotebook.py:53: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s notebook/tests/launchnotebook.py:82: in fetch_url 280s return requests.get(url) 280s /usr/lib/python3/dist-packages/requests/api.py:73: in get 280s return request("get", url, params=params, **kwargs) 280s /usr/lib/python3/dist-packages/requests/api.py:59: in request 280s return session.request(method=method, url=url, **kwargs) 280s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 280s resp = self.send(prep, **send_kwargs) 280s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 280s r = adapter.send(request, **kwargs) 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s self = 280s request = , stream = False 280s timeout = Timeout(connect=None, read=None, total=None), verify = True 280s cert = None, proxies = OrderedDict() 280s 280s def send( 280s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 280s ): 280s """Sends PreparedRequest object. Returns Response object. 280s 280s :param request: The :class:`PreparedRequest ` being sent. 
280s :param stream: (optional) Whether to stream the request content. 280s :param timeout: (optional) How long to wait for the server to send 280s data before giving up, as a float, or a :ref:`(connect timeout, 280s read timeout) ` tuple. 280s :type timeout: float or tuple or urllib3 Timeout object 280s :param verify: (optional) Either a boolean, in which case it controls whether 280s we verify the server's TLS certificate, or a string, in which case it 280s must be a path to a CA bundle to use 280s :param cert: (optional) Any user-provided SSL certificate to be trusted. 280s :param proxies: (optional) The proxies dictionary to apply to the request. 280s :rtype: requests.Response 280s """ 280s 280s try: 280s conn = self.get_connection(request.url, proxies) 280s except LocationValueError as e: 280s raise InvalidURL(e, request=request) 280s 280s self.cert_verify(conn, request.url, verify, cert) 280s url = self.request_url(request, proxies) 280s self.add_headers( 280s request, 280s stream=stream, 280s timeout=timeout, 280s verify=verify, 280s cert=cert, 280s proxies=proxies, 280s ) 280s 280s chunked = not (request.body is None or "Content-Length" in request.headers) 280s 280s if isinstance(timeout, tuple): 280s try: 280s connect, read = timeout 280s timeout = TimeoutSauce(connect=connect, read=read) 280s except ValueError: 280s raise ValueError( 280s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 280s f"or a single float to set both timeouts to the same value." 
280s ) 280s elif isinstance(timeout, TimeoutSauce): 280s pass 280s else: 280s timeout = TimeoutSauce(connect=timeout, read=timeout) 280s 280s try: 280s resp = conn.urlopen( 280s method=request.method, 280s url=url, 280s body=request.body, 280s headers=request.headers, 280s redirect=False, 280s assert_same_host=False, 280s preload_content=False, 280s decode_content=False, 280s retries=self.max_retries, 280s timeout=timeout, 280s chunked=chunked, 280s ) 280s 280s except (ProtocolError, OSError) as err: 280s raise ConnectionError(err, request=request) 280s 280s except MaxRetryError as e: 280s if isinstance(e.reason, ConnectTimeoutError): 280s # TODO: Remove this in 3.0.0: see #2811 280s if not isinstance(e.reason, NewConnectionError): 280s raise ConnectTimeout(e, request=request) 280s 280s if isinstance(e.reason, ResponseError): 280s raise RetryError(e, request=request) 280s 280s if isinstance(e.reason, _ProxyError): 280s raise ProxyError(e, request=request) 280s 280s if isinstance(e.reason, _SSLError): 280s # This branch is for urllib3 v1.22 and later. 
280s raise SSLError(e, request=request) 280s 280s > raise ConnectionError(e, request=request) 280s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 280s 280s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 280s 280s The above exception was the direct cause of the following exception: 280s 280s cls = 280s 280s @classmethod 280s def setup_class(cls): 280s cls.tmp_dir = TemporaryDirectory() 280s def tmp(*parts): 280s path = os.path.join(cls.tmp_dir.name, *parts) 280s try: 280s os.makedirs(path) 280s except OSError as e: 280s if e.errno != errno.EEXIST: 280s raise 280s return path 280s 280s cls.home_dir = tmp('home') 280s data_dir = cls.data_dir = tmp('data') 280s config_dir = cls.config_dir = tmp('config') 280s runtime_dir = cls.runtime_dir = tmp('runtime') 280s cls.notebook_dir = tmp('notebooks') 280s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 280s cls.env_patch.start() 280s # Patch systemwide & user-wide data & config directories, to isolate 280s # the tests from oddities of the local setup. But leave Python env 280s # locations alone, so data files for e.g. nbconvert are accessible. 280s # If this isolation isn't sufficient, you may need to run the tests in 280s # a virtualenv or conda env. 
280s cls.path_patch = patch.multiple( 280s jupyter_core.paths, 280s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 280s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 280s ) 280s cls.path_patch.start() 280s 280s config = cls.config or Config() 280s config.NotebookNotary.db_file = ':memory:' 280s 280s cls.token = hexlify(os.urandom(4)).decode('ascii') 280s 280s started = Event() 280s def start_thread(): 280s try: 280s bind_args = cls.get_bind_args() 280s app = cls.notebook = NotebookApp( 280s port_retries=0, 280s open_browser=False, 280s config_dir=cls.config_dir, 280s data_dir=cls.data_dir, 280s runtime_dir=cls.runtime_dir, 280s notebook_dir=cls.notebook_dir, 280s base_url=cls.url_prefix, 280s config=config, 280s allow_root=True, 280s token=cls.token, 280s **bind_args 280s ) 280s if "asyncio" in sys.modules: 280s app._init_asyncio_patch() 280s import asyncio 280s 280s asyncio.set_event_loop(asyncio.new_event_loop()) 280s # Patch the current loop in order to match production 280s # behavior 280s import nest_asyncio 280s 280s nest_asyncio.apply() 280s # don't register signal handler during tests 280s app.init_signal = lambda : None 280s # clear log handlers and propagate to root for nose to capture it 280s # needs to be redone after initialize, which reconfigures logging 280s app.log.propagate = True 280s app.log.handlers = [] 280s app.initialize(argv=cls.get_argv()) 280s app.log.propagate = True 280s app.log.handlers = [] 280s loop = IOLoop.current() 280s loop.add_callback(started.set) 280s app.start() 280s finally: 280s # set the event, so failure to start doesn't cause a hang 280s started.set() 280s app.session_manager.close() 280s cls.notebook_thread = Thread(target=start_thread) 280s cls.notebook_thread.daemon = True 280s cls.notebook_thread.start() 280s started.wait() 280s > cls.wait_until_alive() 280s 280s notebook/tests/launchnotebook.py:198: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s cls = 280s 280s 
@classmethod 280s def wait_until_alive(cls): 280s """Wait for the server to be alive""" 280s url = cls.base_url() + 'api/contents' 280s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 280s try: 280s cls.fetch_url(url) 280s except ModuleNotFoundError as error: 280s # Errors that should be immediately thrown back to caller 280s raise error 280s except Exception as e: 280s if not cls.notebook_thread.is_alive(): 280s > raise RuntimeError("The notebook server failed to start") from e 280s E RuntimeError: The notebook server failed to start 280s 280s notebook/tests/launchnotebook.py:59: RuntimeError 280s _______________ ERROR at setup of APITest.test_file_checkpoints ________________ 280s 280s self = 280s 280s def _new_conn(self) -> socket.socket: 280s """Establish a socket connection and set nodelay settings on it. 280s 280s :return: New socket connection. 280s """ 280s try: 280s > sock = connection.create_connection( 280s (self._dns_host, self.port), 280s self.timeout, 280s source_address=self.source_address, 280s socket_options=self.socket_options, 280s ) 280s 280s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 280s raise err 280s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 280s 280s address = ('localhost', 12341), timeout = None, source_address = None 280s socket_options = [(6, 1, 1)] 280s 280s def create_connection( 280s address: tuple[str, int], 280s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 280s source_address: tuple[str, int] | None = None, 280s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 280s ) -> socket.socket: 280s """Connect to *address* and return the socket object. 280s 280s Convenience function. Connect to *address* (a 2-tuple ``(host, 280s port)``) and return the socket object. 
Passing the optional 280s *timeout* parameter will set the timeout on the socket instance 280s before attempting to connect. If no *timeout* is supplied, the 280s global default timeout setting returned by :func:`socket.getdefaulttimeout` 280s is used. If *source_address* is set it must be a tuple of (host, port) 280s for the socket to bind as a source address before making the connection. 280s An host of '' or port 0 tells the OS to use the default. 280s """ 280s 280s host, port = address 280s if host.startswith("["): 280s host = host.strip("[]") 280s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s ________________ ERROR at setup of APITest.test_get_404_hidden _________________
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s _________________ ERROR at setup of APITest.test_get_bad_type __________________ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
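The adapter code above normalizes the `timeout` argument before calling `conn.urlopen`: a `(connect, read)` tuple is unpacked, an existing timeout object passes through, and a bare scalar sets both timeouts to the same value. A dependency-free sketch of that branching — `TimeoutPair` and `normalize_timeout` are illustrative stand-ins, not the real `TimeoutSauce`/`urllib3.util.Timeout` classes:

```python
# Sketch of the timeout normalization performed in HTTPAdapter.send above.
# TimeoutPair is an illustrative stand-in for requests' TimeoutSauce /
# urllib3's Timeout, not the real class.
from dataclasses import dataclass
from typing import Optional, Tuple, Union

@dataclass
class TimeoutPair:
    connect: Optional[float]
    read: Optional[float]

def normalize_timeout(
    timeout: Union[None, float, Tuple[float, float], "TimeoutPair"]
) -> TimeoutPair:
    # Mirror the branching in the adapter: a (connect, read) tuple is
    # unpacked, an existing pair passes through, and a bare scalar (or
    # None) sets both timeouts to the same value.
    if isinstance(timeout, tuple):
        try:
            connect, read = timeout
        except ValueError:
            raise ValueError(
                f"Invalid timeout {timeout}. Pass a (connect, read) timeout "
                f"tuple, or a single float to set both timeouts to the same value."
            )
        return TimeoutPair(connect=connect, read=read)
    if isinstance(timeout, TimeoutPair):
        return timeout
    return TimeoutPair(connect=timeout, read=timeout)

print(normalize_timeout((3.05, 27)))  # TimeoutPair(connect=3.05, read=27)
print(normalize_timeout(None))        # TimeoutPair(connect=None, read=None)
```

Note that in the failing requests above, `timeout` is `None`, so both the connect and read timeouts are unset and the connection attempt fails immediately with ECONNREFUSED rather than timing out.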
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
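Because the session uses `Retry(total=0, ...)`, the very first connection failure exhausts the retry budget and `increment()` raises `MaxRetryError`, which requests then wraps as `ConnectionError` — the chain shown in the traceback above. A simplified, stdlib-only sketch of that accounting; `Retry` and `MaxRetryError` here are local stand-ins, not the urllib3 classes (the real `Retry` also tracks connect/read/redirect/status counters):

```python
class MaxRetryError(Exception):
    """Local stand-in for urllib3.exceptions.MaxRetryError (illustrative)."""

class Retry:
    """Minimal stand-in for urllib3.util.retry.Retry, tracking only the
    total budget."""
    def __init__(self, total):
        self.total = total

    def is_exhausted(self):
        return self.total is not None and self.total < 0

    def increment(self, url, error):
        # Return a *new* Retry with one fewer attempt remaining; raise
        # once the budget goes negative, chaining the underlying error.
        new_retry = Retry(None if self.total is None else self.total - 1)
        if new_retry.is_exhausted():
            raise MaxRetryError(
                f"Max retries exceeded with url: {url} (Caused by {error!r})"
            ) from error
        return new_retry

retries = Retry(total=0)
try:
    retries.increment(
        "/a%40b/api/contents",
        ConnectionRefusedError(111, "Connection refused"),
    )
except MaxRetryError as e:
    print(e)
```

With `total=0`, the first `increment()` produces `total=-1`, which `is_exhausted()` reports as spent — hence "Max retries exceeded" on the first refused connection.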
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s ___________ ERROR at setup of APITest.test_get_binary_file_contents ____________ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
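`wait_until_alive` above polls the contents API until the server responds, and bails out early with `RuntimeError` as soon as the server thread has died — which is exactly the failure reported here. A stdlib-only sketch of that poll-with-deadline pattern; the constants and the `fetch`/`thread_alive` callables are illustrative, not the test suite's actual signatures:

```python
import time

MAX_WAITTIME = 30      # seconds; illustrative, mirroring the test constants
POLL_INTERVAL = 0.1

def wait_until_alive(fetch, thread_alive,
                     max_wait=MAX_WAITTIME, interval=POLL_INTERVAL):
    """Poll fetch() until it succeeds, sleeping between attempts.

    Raise RuntimeError immediately if the server thread has died, and
    TimeoutError if the deadline expires without a successful fetch.
    """
    last_error = None
    for _ in range(int(max_wait / interval)):
        try:
            fetch()
            return
        except Exception as e:
            last_error = e
            # Fail fast: no point polling a server whose thread is gone.
            if not thread_alive():
                raise RuntimeError("The notebook server failed to start") from e
            time.sleep(interval)
    raise TimeoutError("Server did not respond in time") from last_error
```

Chaining via `from e` is what produces the "The above exception was the direct cause" linkage in the log: the final `RuntimeError: The notebook server failed to start` carries the underlying `ConnectionError` as its cause.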
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
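The comment at the end of `setup_class` above describes the harness's isolation technique: patch environment variables and `jupyter_core.paths` for the duration of the tests so local configuration cannot leak in. A minimal sketch of that pattern using `unittest.mock.patch.dict` (the variable name here is illustrative, not one the harness actually sets):

```python
import os
from unittest.mock import patch

# Patch an environment variable for the duration of the tests, then restore
# the original environment. The real harness does this via cls.env_patch
# (patch.dict on os.environ) plus patch.multiple on jupyter_core.paths.
env_patch = patch.dict(os.environ, {"EXAMPLE_CONFIG_DIR": "/tmp/test-config"})
env_patch.start()
assert os.environ["EXAMPLE_CONFIG_DIR"] == "/tmp/test-config"
env_patch.stop()  # patch.dict restores the pre-patch environment exactly
assert "EXAMPLE_CONFIG_DIR" not in os.environ
```

Starting and stopping the patch at the class level (rather than using a `with` block) is what lets the harness keep the patched environment alive across every test in the class and undo it in teardown.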
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s ___________ ERROR at setup of APITest.test_get_contents_no_such_file ___________ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s             resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s         except (ProtocolError, OSError) as err:
281s             raise ConnectionError(err, request=request)
281s 
281s         except MaxRetryError as e:
281s             if isinstance(e.reason, ConnectTimeoutError):
281s                 # TODO: Remove this in 3.0.0: see #2811
281s                 if not isinstance(e.reason, NewConnectionError):
281s                     raise ConnectTimeout(e, request=request)
281s 
281s             if isinstance(e.reason, ResponseError):
281s                 raise RetryError(e, request=request)
281s 
281s             if isinstance(e.reason, _ProxyError):
281s                 raise ProxyError(e, request=request)
281s 
281s             if isinstance(e.reason, _SSLError):
281s                 # This branch is for urllib3 v1.22 and later.
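The `send` frame above normalizes the caller's `timeout` into a connect/read pair before handing it to `conn.urlopen`: a `(connect, read)` tuple is unpacked, a scalar applies to both. A standalone sketch of just that rule (the `normalize_timeout` helper and its dict return shape are illustrative, not requests' actual `TimeoutSauce` API):

```python
def normalize_timeout(timeout):
    """Split a requests-style timeout into connect/read parts:
    a (connect, read) tuple is unpacked, a scalar sets both."""
    if isinstance(timeout, tuple):
        try:
            connect, read = timeout
        except ValueError:
            raise ValueError(
                f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
                f"or a single float to set both timeouts to the same value."
            )
        return {"connect": connect, "read": read}
    # a scalar (including None, meaning "no timeout") applies to both phases
    return {"connect": timeout, "read": timeout}
```

Note that in this failing run the effective timeout is `Timeout(connect=None, read=None, total=None)`, so the connect attempt fails immediately on ECONNREFUSED rather than timing out.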
281s                 raise SSLError(e, request=request)
281s 
281s >           raise ConnectionError(e, request=request)
281s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def setup_class(cls):
281s         cls.tmp_dir = TemporaryDirectory()
281s         def tmp(*parts):
281s             path = os.path.join(cls.tmp_dir.name, *parts)
281s             try:
281s                 os.makedirs(path)
281s             except OSError as e:
281s                 if e.errno != errno.EEXIST:
281s                     raise
281s             return path
281s 
281s         cls.home_dir = tmp('home')
281s         data_dir = cls.data_dir = tmp('data')
281s         config_dir = cls.config_dir = tmp('config')
281s         runtime_dir = cls.runtime_dir = tmp('runtime')
281s         cls.notebook_dir = tmp('notebooks')
281s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
281s         cls.env_patch.start()
281s         # Patch systemwide & user-wide data & config directories, to isolate
281s         # the tests from oddities of the local setup. But leave Python env
281s         # locations alone, so data files for e.g. nbconvert are accessible.
281s         # If this isolation isn't sufficient, you may need to run the tests in
281s         # a virtualenv or conda env.
281s         cls.path_patch = patch.multiple(
281s             jupyter_core.paths,
281s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
281s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
281s         )
281s         cls.path_patch.start()
281s 
281s         config = cls.config or Config()
281s         config.NotebookNotary.db_file = ':memory:'
281s 
281s         cls.token = hexlify(os.urandom(4)).decode('ascii')
281s 
281s         started = Event()
281s         def start_thread():
281s             try:
281s                 bind_args = cls.get_bind_args()
281s                 app = cls.notebook = NotebookApp(
281s                     port_retries=0,
281s                     open_browser=False,
281s                     config_dir=cls.config_dir,
281s                     data_dir=cls.data_dir,
281s                     runtime_dir=cls.runtime_dir,
281s                     notebook_dir=cls.notebook_dir,
281s                     base_url=cls.url_prefix,
281s                     config=config,
281s                     allow_root=True,
281s                     token=cls.token,
281s                     **bind_args
281s                 )
281s                 if "asyncio" in sys.modules:
281s                     app._init_asyncio_patch()
281s                     import asyncio
281s 
281s                     asyncio.set_event_loop(asyncio.new_event_loop())
281s                     # Patch the current loop in order to match production
281s                     # behavior
281s                     import nest_asyncio
281s 
281s                     nest_asyncio.apply()
281s                 # don't register signal handler during tests
281s                 app.init_signal = lambda : None
281s                 # clear log handlers and propagate to root for nose to capture it
281s                 # needs to be redone after initialize, which reconfigures logging
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 app.initialize(argv=cls.get_argv())
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 loop = IOLoop.current()
281s                 loop.add_callback(started.set)
281s                 app.start()
281s             finally:
281s                 # set the event, so failure to start doesn't cause a hang
281s                 started.set()
281s                 app.session_manager.close()
281s         cls.notebook_thread = Thread(target=start_thread)
281s         cls.notebook_thread.daemon = True
281s         cls.notebook_thread.start()
281s         started.wait()
281s >       cls.wait_until_alive()
281s 
281s notebook/tests/launchnotebook.py:198: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E                   RuntimeError: The notebook server failed to start
281s 
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s ______________ ERROR at setup of APITest.test_get_dir_no_content _______________
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s >           sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s     raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s 
281s     def create_connection(
281s         address: tuple[str, int],
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         source_address: tuple[str, int] | None = None,
281s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s     ) -> socket.socket:
281s         """Connect to *address* and return the socket object.
281s 
281s         Convenience function. Connect to *address* (a 2-tuple ``(host,
281s         port)``) and return the socket object. Passing the optional
281s         *timeout* parameter will set the timeout on the socket instance
281s         before attempting to connect. If no *timeout* is supplied, the
281s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
281s         is used. If *source_address* is set it must be a tuple of (host, port)
281s         for the socket to bind as a source address before making the connection.
281s         An host of '' or port 0 tells the OS to use the default.
281s         """
281s 
281s         host, port = address
281s         if host.startswith("["):
281s             host = host.strip("[]")
281s         err = None
281s 
281s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
281s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
281s         # The original create_connection function always returns all records.
281s         family = allowed_gai_family()
281s 
281s         try:
281s             host.encode("idna")
281s         except UnicodeError:
281s             raise LocationParseError(f"'{host}', label empty or too long") from None
281s 
281s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
281s             af, socktype, proto, canonname, sa = res
281s             sock = None
281s             try:
281s                 sock = socket.socket(af, socktype, proto)
281s 
281s                 # If provided, set socket level options before connecting.
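Errno 111 in these tracebacks is `ECONNREFUSED`: nothing is listening on localhost:12341 because the notebook server died before it could bind the port. A small stdlib reproduction of that failure mode (the `connect_errno` helper and the bind-then-release trick for finding a closed port are illustrative):

```python
import errno
import socket

def connect_errno(host, port):
    """Attempt a TCP connection; return None on success or the OS errno on failure."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(2)
    try:
        s.connect((host, port))
        return None
    except OSError as e:
        return e.errno
    finally:
        s.close()

# Find a port with no listener: bind an ephemeral port, then release it
# before connecting (a tiny race, but good enough for a demonstration).
tmp = socket.socket()
tmp.bind(("127.0.0.1", 0))
closed_port = tmp.getsockname()[1]
tmp.close()
```

On Linux, `connect_errno("127.0.0.1", closed_port)` yields `errno.ECONNREFUSED` (111), the same error urllib3 wraps in `NewConnectionError` above.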
281s                 _set_socket_options(sock, socket_options)
281s 
281s                 if timeout is not _DEFAULT_TIMEOUT:
281s                     sock.settimeout(timeout)
281s                 if source_address:
281s                     sock.bind(source_address)
281s >               sock.connect(sa)
281s E               ConnectionRefusedError: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s method = 'GET', url = '/a%40b/api/contents', body = None
281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s redirect = False, assert_same_host = False
281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
281s release_conn = False, chunked = False, body_pos = None, preload_content = False
281s decode_content = False, response_kw = {}
281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
281s destination_scheme = None, conn = None, release_this_conn = True
281s http_tunnel_required = False, err = None, clean_exit = False
281s 
281s     def urlopen(  # type: ignore[override]
281s         self,
281s         method: str,
281s         url: str,
281s         body: _TYPE_BODY | None = None,
281s         headers: typing.Mapping[str, str] | None = None,
281s         retries: Retry | bool | int | None = None,
281s         redirect: bool = True,
281s         assert_same_host: bool = True,
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         pool_timeout: int | None = None,
281s         release_conn: bool | None = None,
281s         chunked: bool = False,
281s         body_pos: _TYPE_BODY_POSITION | None = None,
281s         preload_content: bool = True,
281s         decode_content: bool = True,
281s         **response_kw: typing.Any,
281s     ) -> BaseHTTPResponse:
281s         """
281s         Get a connection from the pool and perform an HTTP request. This is the
281s         lowest level call for making a request, so you'll need to specify all
281s         the raw details.
281s 
281s         .. note::
281s 
281s            More commonly, it's appropriate to use a convenience method
281s            such as :meth:`request`.
281s 
281s         .. note::
281s 
281s            `release_conn` will only behave as expected if
281s            `preload_content=False` because we want to make
281s            `preload_content=False` the default behaviour someday soon without
281s            breaking backwards compatibility.
281s 
281s         :param method:
281s             HTTP request method (such as GET, POST, PUT, etc.)
281s 
281s         :param url:
281s             The URL to perform the request on.
281s 
281s         :param body:
281s             Data to send in the request body, either :class:`str`, :class:`bytes`,
281s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
281s 
281s         :param headers:
281s             Dictionary of custom headers to send, such as User-Agent,
281s             If-None-Match, etc. If None, pool headers are used. If provided,
281s             these headers completely replace any pool-specific headers.
281s 
281s         :param retries:
281s             Configure the number of retries to allow before raising a
281s             :class:`~urllib3.exceptions.MaxRetryError` exception.
281s 
281s             Pass ``None`` to retry until you receive a response. Pass a
281s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
281s             over different types of retries.
281s             Pass an integer number to retry connection errors that many times,
281s             but no other types of errors. Pass zero to never retry.
281s 
281s             If ``False``, then retries are disabled and any exception is raised
281s             immediately. Also, instead of raising a MaxRetryError on redirects,
281s             the redirect response will be returned.
281s 
281s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
281s 
281s         :param redirect:
281s             If True, automatically handle redirects (status codes 301, 302,
281s             303, 307, 308). Each redirect counts as a retry. Disabling retries
281s             will disable redirect, too.
281s 
281s         :param assert_same_host:
281s             If ``True``, will make sure that the host of the pool requests is
281s             consistent else will raise HostChangedError. When ``False``, you can
281s             use the pool on an HTTP proxy and request foreign hosts.
281s 
281s         :param timeout:
281s             If specified, overrides the default timeout for this one
281s             request. It may be a float (in seconds) or an instance of
281s             :class:`urllib3.util.Timeout`.
281s 
281s         :param pool_timeout:
281s             If set and the pool is set to block=True, then this method will
281s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
281s             connection is available within the time period.
281s 
281s         :param bool preload_content:
281s             If True, the response's body will be preloaded into memory.
281s 
281s         :param bool decode_content:
281s             If True, will attempt to decode the body based on the
281s             'content-encoding' header.
281s 
281s         :param release_conn:
281s             If False, then the urlopen call will not release the connection
281s             back into the pool once a response is received (but will release if
281s             you read the entire contents of the response such as when
281s             `preload_content=True`). This is useful if you're not preloading
281s             the response's content immediately. You will need to call
281s             ``r.release_conn()`` on the response ``r`` to return the connection
281s             back into the pool. If None, it takes the value of ``preload_content``
281s             which defaults to ``True``.
281s 
281s         :param bool chunked:
281s             If True, urllib3 will send the body using chunked transfer
281s             encoding. Otherwise, urllib3 will send the body using the standard
281s             content-length form. Defaults to False.
281s 
281s         :param int body_pos:
281s             Position to seek to in file-like body in the event of a retry or
281s             redirect. Typically this won't need to be set because urllib3 will
281s             auto-populate the value when needed.
281s         """
281s         parsed_url = parse_url(url)
281s         destination_scheme = parsed_url.scheme
281s 
281s         if headers is None:
281s             headers = self.headers
281s 
281s         if not isinstance(retries, Retry):
281s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
281s 
281s         if release_conn is None:
281s             release_conn = preload_content
281s 
281s         # Check host
281s         if assert_same_host and not self.is_same_host(url):
281s             raise HostChangedError(self, url, retries)
281s 
281s         # Ensure that the URL we're connecting to is properly encoded
281s         if url.startswith("/"):
281s             url = to_str(_encode_target(url))
281s         else:
281s             url = to_str(parsed_url.url)
281s 
281s         conn = None
281s 
281s         # Track whether `conn` needs to be released before
281s         # returning/raising/recursing. Update this variable if necessary, and
281s         # leave `release_conn` constant throughout the function. That way, if
281s         # the function recurses, the original value of `release_conn` will be
281s         # passed down into the recursive call, and its value will be respected.
281s         #
281s         # See issue #651 [1] for details.
281s         #
281s         # [1] 
281s         release_this_conn = release_conn
281s 
281s         http_tunnel_required = connection_requires_http_tunnel(
281s             self.proxy, self.proxy_config, destination_scheme
281s         )
281s 
281s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
281s         # have to copy the headers dict so we can safely change it without those
281s         # changes being reflected in anyone else's copy.
281s         if not http_tunnel_required:
281s             headers = headers.copy()  # type: ignore[attr-defined]
281s             headers.update(self.proxy_headers)  # type: ignore[union-attr]
281s 
281s         # Must keep the exception bound to a separate variable or else Python 3
281s         # complains about UnboundLocalError.
281s         err = None
281s 
281s         # Keep track of whether we cleanly exited the except block. This
281s         # ensures we do proper cleanup in finally.
281s         clean_exit = False
281s 
281s         # Rewind body position, if needed. Record current position
281s         # for future rewinds in the event of a redirect/retry.
281s         body_pos = set_file_position(body, body_pos)
281s 
281s         try:
281s             # Request a connection from the queue.
281s             timeout_obj = self._get_timeout(timeout)
281s             conn = self._get_conn(timeout=pool_timeout)
281s 
281s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
281s 
281s             # Is this a closed/new connection that requires CONNECT tunnelling?
281s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
281s                 try:
281s                     self._prepare_proxy(conn)
281s                 except (BaseSSLError, OSError, SocketTimeout) as e:
281s                     self._raise_timeout(
281s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
281s                     )
281s                     raise
281s 
281s             # If we're going to release the connection in ``finally:``, then
281s             # the response doesn't need to know about the connection. Otherwise
281s             # it will also try to release it and we'll have a double-release
281s             # mess.
281s             response_conn = conn if not release_conn else None
281s 
281s             # Make the request on the HTTPConnection object
281s >           response = self._make_request(
281s                 conn,
281s                 method,
281s                 url,
281s                 timeout=timeout_obj,
281s                 body=body,
281s                 headers=headers,
281s                 chunked=chunked,
281s                 retries=retries,
281s                 response_conn=response_conn,
281s                 preload_content=preload_content,
281s                 decode_content=decode_content,
281s                 **response_kw,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
281s     conn.request(
281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
281s     self.endheaders()
281s /usr/lib/python3.12/http/client.py:1331: in endheaders
281s     self._send_output(message_body, encode_chunked=encode_chunked)
281s /usr/lib/python3.12/http/client.py:1091: in _send_output
281s     self.send(msg)
281s /usr/lib/python3.12/http/client.py:1035: in send
281s     self.connect()
281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
281s     self.sock = self._new_conn()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s             sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s         except socket.gaierror as e:
281s             raise NameResolutionError(self.host, self, e) from e
281s         except SocketTimeout as e:
281s             raise ConnectTimeoutError(
281s                 self,
281s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
281s             ) from e
281s 
281s         except OSError as e:
281s >           raise NewConnectionError(
281s                 self, f"Failed to establish a new connection: {e}"
281s             ) from e
281s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s >           resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
281s     retries = retries.increment(
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s method = 'GET', url = '/a%40b/api/contents', response = None
281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
281s _pool = 
281s _stacktrace = 
281s 
281s     def increment(
281s         self,
281s         method: str | None = None,
281s         url: str | None = None,
281s         response: BaseHTTPResponse | None = None,
281s         error: Exception | None = None,
281s         _pool: ConnectionPool | None = None,
281s         _stacktrace: TracebackType | None = None,
281s     ) -> Retry:
281s         """Return a new Retry object with incremented retry counters.
281s 
281s         :param response: A response object, or None, if the server did not
281s             return a response.
281s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
281s         :param Exception error: An error encountered during the request, or
281s             None if the response was received successfully.
281s 
281s         :return: A new ``Retry`` object.
281s         """
281s         if self.total is False and error:
281s             # Disabled, indicate to re-raise the error.
281s             raise reraise(type(error), error, _stacktrace)
281s 
281s         total = self.total
281s         if total is not None:
281s             total -= 1
281s 
281s         connect = self.connect
281s         read = self.read
281s         redirect = self.redirect
281s         status_count = self.status
281s         other = self.other
281s         cause = "unknown"
281s         status = None
281s         redirect_location = None
281s 
281s         if error and self._is_connection_error(error):
281s             # Connect retry?
281s             if connect is False:
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif connect is not None:
281s                 connect -= 1
281s 
281s         elif error and self._is_read_error(error):
281s             # Read retry?
281s             if read is False or method is None or not self._is_method_retryable(method):
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif read is not None:
281s                 read -= 1
281s 
281s         elif error:
281s             # Other retry?
281s             if other is not None:
281s                 other -= 1
281s 
281s         elif response and response.get_redirect_location():
281s             # Redirect retry?
281s             if redirect is not None:
281s                 redirect -= 1
281s             cause = "too many redirects"
281s             response_redirect_location = response.get_redirect_location()
281s             if response_redirect_location:
281s                 redirect_location = response_redirect_location
281s             status = response.status
281s 
281s         else:
281s             # Incrementing because of a server error like a 500 in
281s             # status_forcelist and the given method is in the allowed_methods
281s             cause = ResponseError.GENERIC_ERROR
281s             if response and response.status:
281s                 if status_count is not None:
281s                     status_count -= 1
281s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
281s                 status = response.status
281s 
281s         history = self.history + (
281s             RequestHistory(method, url, error, status, redirect_location),
281s         )
281s 
281s         new_retry = self.new(
281s             total=total,
281s             connect=connect,
281s             read=read,
281s             redirect=redirect,
281s             status=status_count,
281s             other=other,
281s             history=history,
281s         )
281s 
281s         if new_retry.is_exhausted():
281s             reason = error or ResponseError(cause)
281s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
281s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s >               cls.fetch_url(url)
281s 
281s notebook/tests/launchnotebook.py:53: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s notebook/tests/launchnotebook.py:82: in fetch_url
281s     return requests.get(url)
281s /usr/lib/python3/dist-packages/requests/api.py:73: in get
281s     return request("get", url, params=params, **kwargs)
281s /usr/lib/python3/dist-packages/requests/api.py:59: in request
281s     return session.request(method=method, url=url, **kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
281s     resp = self.send(prep, **send_kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
281s     r = adapter.send(request, **kwargs)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
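The `increment` frames above show why the failure is immediate: each error produces a new `Retry` with decremented counters, and `Retry(total=0)` is exhausted after the very first connection error, raising `MaxRetryError`. A toy model of just that bookkeeping (the `ToyRetry` class is illustrative, not urllib3's actual API, and a plain `RuntimeError` stands in for `MaxRetryError`):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ToyRetry:
    """Minimal model of urllib3's counter bookkeeping: each failure yields a
    new immutable object; a counter going negative means the retry budget is
    exhausted and the request must be aborted."""
    total: int = 0

    def is_exhausted(self):
        return self.total < 0

    def increment(self):
        new = ToyRetry(total=self.total - 1)
        if new.is_exhausted():
            raise RuntimeError("Max retries exceeded")  # stands in for MaxRetryError
        return new
```

With `total=0`, the first `increment()` already raises, which is exactly what the `retries = Retry(total=0, ...)` in the traceback does when the connection is refused.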
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s             resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s         except (ProtocolError, OSError) as err:
281s             raise ConnectionError(err, request=request)
281s 
281s         except MaxRetryError as e:
281s             if isinstance(e.reason, ConnectTimeoutError):
281s                 # TODO: Remove this in 3.0.0: see #2811
281s                 if not isinstance(e.reason, NewConnectionError):
281s                     raise ConnectTimeout(e, request=request)
281s 
281s             if isinstance(e.reason, ResponseError):
281s                 raise RetryError(e, request=request)
281s 
281s             if isinstance(e.reason, _ProxyError):
281s                 raise ProxyError(e, request=request)
281s 
281s             if isinstance(e.reason, _SSLError):
281s                 # This branch is for urllib3 v1.22 and later.
281s                 raise SSLError(e, request=request)
281s 
281s >           raise ConnectionError(e, request=request)
281s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def setup_class(cls):
281s         cls.tmp_dir = TemporaryDirectory()
281s         def tmp(*parts):
281s             path = os.path.join(cls.tmp_dir.name, *parts)
281s             try:
281s                 os.makedirs(path)
281s             except OSError as e:
281s                 if e.errno != errno.EEXIST:
281s                     raise
281s             return path
281s 
281s         cls.home_dir = tmp('home')
281s         data_dir = cls.data_dir = tmp('data')
281s         config_dir = cls.config_dir = tmp('config')
281s         runtime_dir = cls.runtime_dir = tmp('runtime')
281s         cls.notebook_dir = tmp('notebooks')
281s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
281s         cls.env_patch.start()
281s         # Patch systemwide & user-wide data & config directories, to isolate
281s         # the tests from oddities of the local setup. But leave Python env
281s         # locations alone, so data files for e.g. nbconvert are accessible.
281s         # If this isolation isn't sufficient, you may need to run the tests in
281s         # a virtualenv or conda env.
281s         cls.path_patch = patch.multiple(
281s             jupyter_core.paths,
281s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
281s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
281s         )
281s         cls.path_patch.start()
281s 
281s         config = cls.config or Config()
281s         config.NotebookNotary.db_file = ':memory:'
281s 
281s         cls.token = hexlify(os.urandom(4)).decode('ascii')
281s 
281s         started = Event()
281s         def start_thread():
281s             try:
281s                 bind_args = cls.get_bind_args()
281s                 app = cls.notebook = NotebookApp(
281s                     port_retries=0,
281s                     open_browser=False,
281s                     config_dir=cls.config_dir,
281s                     data_dir=cls.data_dir,
281s                     runtime_dir=cls.runtime_dir,
281s                     notebook_dir=cls.notebook_dir,
281s                     base_url=cls.url_prefix,
281s                     config=config,
281s                     allow_root=True,
281s                     token=cls.token,
281s                     **bind_args
281s                 )
281s                 if "asyncio" in sys.modules:
281s                     app._init_asyncio_patch()
281s                     import asyncio
281s 
281s                     asyncio.set_event_loop(asyncio.new_event_loop())
281s                     # Patch the current loop in order to match production
281s                     # behavior
281s                     import nest_asyncio
281s 
281s                     nest_asyncio.apply()
281s                 # don't register signal handler during tests
281s                 app.init_signal = lambda : None
281s                 # clear log handlers and propagate to root for nose to capture it
281s                 # needs to be redone after initialize, which reconfigures logging
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 app.initialize(argv=cls.get_argv())
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 loop = IOLoop.current()
281s                 loop.add_callback(started.set)
281s                 app.start()
281s             finally:
281s                 # set the event, so failure to start doesn't cause a hang
281s                 started.set()
281s                 app.session_manager.close()
281s         cls.notebook_thread = Thread(target=start_thread)
281s         cls.notebook_thread.daemon = True
281s         cls.notebook_thread.start()
281s         started.wait()
281s >       cls.wait_until_alive()
281s 
281s notebook/tests/launchnotebook.py:198: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E                   RuntimeError: The notebook server failed to start
281s 
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s ________________ ERROR at setup of APITest.test_get_nb_contents ________________
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s >           sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s     raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s 
281s     def create_connection(
281s         address: tuple[str, int],
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         source_address: tuple[str, int] | None = None,
281s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s     ) -> socket.socket:
281s         """Connect to *address* and return the socket object.
281s 
281s         Convenience function. Connect to *address* (a 2-tuple ``(host,
281s         port)``) and return the socket object. Passing the optional
281s         *timeout* parameter will set the timeout on the socket instance
281s         before attempting to connect. If no *timeout* is supplied, the
281s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
281s         is used. If *source_address* is set it must be a tuple of (host, port)
281s         for the socket to bind as a source address before making the connection.
281s         An host of '' or port 0 tells the OS to use the default.
281s         """
281s 
281s         host, port = address
281s         if host.startswith("["):
281s             host = host.strip("[]")
281s         err = None
281s 
281s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
281s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
281s         # The original create_connection function always returns all records.
281s         family = allowed_gai_family()
281s 
281s         try:
281s             host.encode("idna")
281s         except UnicodeError:
281s             raise LocationParseError(f"'{host}', label empty or too long") from None
281s 
281s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
281s             af, socktype, proto, canonname, sa = res
281s             sock = None
281s             try:
281s                 sock = socket.socket(af, socktype, proto)
281s 
281s                 # If provided, set socket level options before connecting.
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s ________________ ERROR at setup of APITest.test_get_nb_invalid _________________
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s
281s         try:
281s >           resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s
281s /usr/lib/python3/dist-packages/requests/adapters.py:486:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
281s     retries = retries.increment(
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s method = 'GET', url = '/a%40b/api/contents', response = None
281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
281s _pool =
281s _stacktrace =
281s
281s     def increment(
281s         self,
281s         method: str | None = None,
281s         url: str | None = None,
281s         response: BaseHTTPResponse | None = None,
281s         error: Exception | None = None,
281s         _pool: ConnectionPool | None = None,
281s         _stacktrace: TracebackType | None = None,
281s     ) -> Retry:
281s         """Return a new Retry object with incremented retry counters.
281s
281s         :param response: A response object, or None, if the server did not
281s             return a response.
281s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
281s         :param Exception error: An error encountered during the request, or
281s             None if the response was received successfully.
281s
281s         :return: A new ``Retry`` object.
281s         """
281s         if self.total is False and error:
281s             # Disabled, indicate to re-raise the error.
281s             raise reraise(type(error), error, _stacktrace)
281s
281s         total = self.total
281s         if total is not None:
281s             total -= 1
281s
281s         connect = self.connect
281s         read = self.read
281s         redirect = self.redirect
281s         status_count = self.status
281s         other = self.other
281s         cause = "unknown"
281s         status = None
281s         redirect_location = None
281s
281s         if error and self._is_connection_error(error):
281s             # Connect retry?
281s             if connect is False:
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif connect is not None:
281s                 connect -= 1
281s
281s         elif error and self._is_read_error(error):
281s             # Read retry?
281s             if read is False or method is None or not self._is_method_retryable(method):
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif read is not None:
281s                 read -= 1
281s
281s         elif error:
281s             # Other retry?
281s             if other is not None:
281s                 other -= 1
281s
281s         elif response and response.get_redirect_location():
281s             # Redirect retry?
281s             if redirect is not None:
281s                 redirect -= 1
281s             cause = "too many redirects"
281s             response_redirect_location = response.get_redirect_location()
281s             if response_redirect_location:
281s                 redirect_location = response_redirect_location
281s             status = response.status
281s
281s         else:
281s             # Incrementing because of a server error like a 500 in
281s             # status_forcelist and the given method is in the allowed_methods
281s             cause = ResponseError.GENERIC_ERROR
281s             if response and response.status:
281s                 if status_count is not None:
281s                     status_count -= 1
281s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
281s                 status = response.status
281s
281s         history = self.history + (
281s             RequestHistory(method, url, error, status, redirect_location),
281s         )
281s
281s         new_retry = self.new(
281s             total=total,
281s             connect=connect,
281s             read=read,
281s             redirect=redirect,
281s             status=status_count,
281s             other=other,
281s             history=history,
281s         )
281s
281s         if new_retry.is_exhausted():
281s             reason = error or ResponseError(cause)
281s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
281s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s
281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
281s
281s During handling of the above exception, another exception occurred:
281s
281s cls =
281s
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s >               cls.fetch_url(url)
281s
281s notebook/tests/launchnotebook.py:53:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s notebook/tests/launchnotebook.py:82: in fetch_url
281s     return requests.get(url)
281s /usr/lib/python3/dist-packages/requests/api.py:73: in get
281s     return request("get", url, params=params, **kwargs)
281s /usr/lib/python3/dist-packages/requests/api.py:59: in request
281s     return session.request(method=method, url=url, **kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
281s     resp = self.send(prep, **send_kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
281s     r = adapter.send(request, **kwargs)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s self =
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s
281s         try:
281s             resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s
281s         except (ProtocolError, OSError) as err:
281s             raise ConnectionError(err, request=request)
281s
281s         except MaxRetryError as e:
281s             if isinstance(e.reason, ConnectTimeoutError):
281s                 # TODO: Remove this in 3.0.0: see #2811
281s                 if not isinstance(e.reason, NewConnectionError):
281s                     raise ConnectTimeout(e, request=request)
281s
281s             if isinstance(e.reason, ResponseError):
281s                 raise RetryError(e, request=request)
281s
281s             if isinstance(e.reason, _ProxyError):
281s                 raise ProxyError(e, request=request)
281s
281s             if isinstance(e.reason, _SSLError):
281s                 # This branch is for urllib3 v1.22 and later.
281s                 raise SSLError(e, request=request)
281s
281s >           raise ConnectionError(e, request=request)
281s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s
281s The above exception was the direct cause of the following exception:
281s
281s cls =
281s
281s     @classmethod
281s     def setup_class(cls):
281s         cls.tmp_dir = TemporaryDirectory()
281s         def tmp(*parts):
281s             path = os.path.join(cls.tmp_dir.name, *parts)
281s             try:
281s                 os.makedirs(path)
281s             except OSError as e:
281s                 if e.errno != errno.EEXIST:
281s                     raise
281s             return path
281s
281s         cls.home_dir = tmp('home')
281s         data_dir = cls.data_dir = tmp('data')
281s         config_dir = cls.config_dir = tmp('config')
281s         runtime_dir = cls.runtime_dir = tmp('runtime')
281s         cls.notebook_dir = tmp('notebooks')
281s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
281s         cls.env_patch.start()
281s         # Patch systemwide & user-wide data & config directories, to isolate
281s         # the tests from oddities of the local setup. But leave Python env
281s         # locations alone, so data files for e.g. nbconvert are accessible.
281s         # If this isolation isn't sufficient, you may need to run the tests in
281s         # a virtualenv or conda env.
281s         cls.path_patch = patch.multiple(
281s             jupyter_core.paths,
281s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
281s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
281s         )
281s         cls.path_patch.start()
281s
281s         config = cls.config or Config()
281s         config.NotebookNotary.db_file = ':memory:'
281s
281s         cls.token = hexlify(os.urandom(4)).decode('ascii')
281s
281s         started = Event()
281s         def start_thread():
281s             try:
281s                 bind_args = cls.get_bind_args()
281s                 app = cls.notebook = NotebookApp(
281s                     port_retries=0,
281s                     open_browser=False,
281s                     config_dir=cls.config_dir,
281s                     data_dir=cls.data_dir,
281s                     runtime_dir=cls.runtime_dir,
281s                     notebook_dir=cls.notebook_dir,
281s                     base_url=cls.url_prefix,
281s                     config=config,
281s                     allow_root=True,
281s                     token=cls.token,
281s                     **bind_args
281s                 )
281s                 if "asyncio" in sys.modules:
281s                     app._init_asyncio_patch()
281s                     import asyncio
281s
281s                     asyncio.set_event_loop(asyncio.new_event_loop())
281s                     # Patch the current loop in order to match production
281s                     # behavior
281s                     import nest_asyncio
281s
281s                     nest_asyncio.apply()
281s                 # don't register signal handler during tests
281s                 app.init_signal = lambda : None
281s                 # clear log handlers and propagate to root for nose to capture it
281s                 # needs to be redone after initialize, which reconfigures logging
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 app.initialize(argv=cls.get_argv())
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 loop = IOLoop.current()
281s                 loop.add_callback(started.set)
281s                 app.start()
281s             finally:
281s                 # set the event, so failure to start doesn't cause a hang
281s                 started.set()
281s                 app.session_manager.close()
281s         cls.notebook_thread = Thread(target=start_thread)
281s         cls.notebook_thread.daemon = True
281s         cls.notebook_thread.start()
281s         started.wait()
281s >       cls.wait_until_alive()
281s
281s notebook/tests/launchnotebook.py:198:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s cls =
281s
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E                   RuntimeError: The notebook server failed to start
281s
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s _______________ ERROR at setup of APITest.test_get_nb_no_content _______________
281s
281s self =
281s
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s
281s         :return: New socket connection.
281s         """
281s         try:
281s >           sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s     raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s
281s     def create_connection(
281s         address: tuple[str, int],
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         source_address: tuple[str, int] | None = None,
281s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s     ) -> socket.socket:
281s         """Connect to *address* and return the socket object.
281s
281s         Convenience function. Connect to *address* (a 2-tuple ``(host,
281s         port)``) and return the socket object. Passing the optional
281s         *timeout* parameter will set the timeout on the socket instance
281s         before attempting to connect. If no *timeout* is supplied, the
281s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
281s         is used. If *source_address* is set it must be a tuple of (host, port)
281s         for the socket to bind as a source address before making the connection.
281s         An host of '' or port 0 tells the OS to use the default.
281s         """
281s
281s         host, port = address
281s         if host.startswith("["):
281s             host = host.strip("[]")
281s         err = None
281s
281s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
281s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
281s         # The original create_connection function always returns all records.
281s         family = allowed_gai_family()
281s
281s         try:
281s             host.encode("idna")
281s         except UnicodeError:
281s             raise LocationParseError(f"'{host}', label empty or too long") from None
281s
281s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
281s             af, socktype, proto, canonname, sa = res
281s             sock = None
281s             try:
281s                 sock = socket.socket(af, socktype, proto)
281s
281s                 # If provided, set socket level options before connecting.
281s                 _set_socket_options(sock, socket_options)
281s
281s                 if timeout is not _DEFAULT_TIMEOUT:
281s                     sock.settimeout(timeout)
281s                 if source_address:
281s                     sock.bind(source_address)
281s >               sock.connect(sa)
281s E               ConnectionRefusedError: [Errno 111] Connection refused
281s
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
281s
281s The above exception was the direct cause of the following exception:
281s
281s self =
281s method = 'GET', url = '/a%40b/api/contents', body = None
281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s redirect = False, assert_same_host = False
281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
281s release_conn = False, chunked = False, body_pos = None, preload_content = False
281s decode_content = False, response_kw = {}
281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
281s destination_scheme = None, conn = None, release_this_conn = True
281s http_tunnel_required = False, err = None, clean_exit = False
281s
281s     def urlopen(  # type: ignore[override]
281s         self,
281s         method: str,
281s         url: str,
281s         body: _TYPE_BODY | None = None,
281s         headers: typing.Mapping[str, str] | None = None,
281s         retries: Retry | bool | int | None = None,
281s         redirect: bool = True,
281s         assert_same_host: bool = True,
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         pool_timeout: int | None = None,
281s         release_conn: bool | None = None,
281s         chunked: bool = False,
281s         body_pos: _TYPE_BODY_POSITION | None = None,
281s         preload_content: bool = True,
281s         decode_content: bool = True,
281s         **response_kw: typing.Any,
281s     ) -> BaseHTTPResponse:
281s         """
281s         Get a connection from the pool and perform an HTTP request. This is the
281s         lowest level call for making a request, so you'll need to specify all
281s         the raw details.
281s
281s         .. note::
281s
281s            More commonly, it's appropriate to use a convenience method
281s            such as :meth:`request`.
281s
281s         .. note::
281s
281s            `release_conn` will only behave as expected if
281s            `preload_content=False` because we want to make
281s            `preload_content=False` the default behaviour someday soon without
281s            breaking backwards compatibility.
281s
281s         :param method:
281s             HTTP request method (such as GET, POST, PUT, etc.)
281s
281s         :param url:
281s             The URL to perform the request on.
281s
281s         :param body:
281s             Data to send in the request body, either :class:`str`, :class:`bytes`,
281s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
281s
281s         :param headers:
281s             Dictionary of custom headers to send, such as User-Agent,
281s             If-None-Match, etc. If None, pool headers are used. If provided,
281s             these headers completely replace any pool-specific headers.
281s
281s         :param retries:
281s             Configure the number of retries to allow before raising a
281s             :class:`~urllib3.exceptions.MaxRetryError` exception.
281s
281s             Pass ``None`` to retry until you receive a response. Pass a
281s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
281s             over different types of retries.
281s             Pass an integer number to retry connection errors that many times,
281s             but no other types of errors. Pass zero to never retry.
281s
281s             If ``False``, then retries are disabled and any exception is raised
281s             immediately. Also, instead of raising a MaxRetryError on redirects,
281s             the redirect response will be returned.
281s
281s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
281s
281s         :param redirect:
281s             If True, automatically handle redirects (status codes 301, 302,
281s             303, 307, 308). Each redirect counts as a retry. Disabling retries
281s             will disable redirect, too.
281s
281s         :param assert_same_host:
281s             If ``True``, will make sure that the host of the pool requests is
281s             consistent else will raise HostChangedError. When ``False``, you can
281s             use the pool on an HTTP proxy and request foreign hosts.
281s
281s         :param timeout:
281s             If specified, overrides the default timeout for this one
281s             request. It may be a float (in seconds) or an instance of
281s             :class:`urllib3.util.Timeout`.
281s
281s         :param pool_timeout:
281s             If set and the pool is set to block=True, then this method will
281s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
281s             connection is available within the time period.
281s
281s         :param bool preload_content:
281s             If True, the response's body will be preloaded into memory.
281s
281s         :param bool decode_content:
281s             If True, will attempt to decode the body based on the
281s             'content-encoding' header.
281s
281s         :param release_conn:
281s             If False, then the urlopen call will not release the connection
281s             back into the pool once a response is received (but will release if
281s             you read the entire contents of the response such as when
281s             `preload_content=True`). This is useful if you're not preloading
281s             the response's content immediately. You will need to call
281s             ``r.release_conn()`` on the response ``r`` to return the connection
281s             back into the pool. If None, it takes the value of ``preload_content``
281s             which defaults to ``True``.
281s
281s         :param bool chunked:
281s             If True, urllib3 will send the body using chunked transfer
281s             encoding. Otherwise, urllib3 will send the body using the standard
281s             content-length form. Defaults to False.
281s
281s         :param int body_pos:
281s             Position to seek to in file-like body in the event of a retry or
281s             redirect. Typically this won't need to be set because urllib3 will
281s             auto-populate the value when needed.
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
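The `(connect timeout, read timeout)` tuple described in the `send` docstring above is normalized into a single timeout object before the request is made. A self-contained sketch of that normalization, using a stand-in class rather than urllib3's real `Timeout`:

```python
class SimpleTimeout:
    """Stand-in for urllib3's Timeout; holds separate connect/read values."""

    def __init__(self, connect=None, read=None):
        self.connect = connect
        self.read = read


def normalize_timeout(timeout):
    """Accept a (connect, read) tuple, an existing SimpleTimeout, or a
    single number, mirroring the branching in the adapter code shown in
    this traceback. (Hypothetical helper for illustration.)"""
    if isinstance(timeout, tuple):
        try:
            connect, read = timeout
        except ValueError:
            raise ValueError(
                f"Invalid timeout {timeout}. Pass a (connect, read) tuple "
                f"or a single float."
            )
        return SimpleTimeout(connect=connect, read=read)
    if isinstance(timeout, SimpleTimeout):
        return timeout
    # A single value sets both timeouts.
    return SimpleTimeout(connect=timeout, read=timeout)


t = normalize_timeout((3.05, 27))
print(t.connect, t.read)  # 3.05 27
```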
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
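The counter bookkeeping in `Retry.increment` — decrement `total` plus the counter matching the error category, then raise once a counter goes negative — can be modeled without urllib3. This is a deliberately simplified, hypothetical sketch of that logic, not the library's code (the real `Retry` also tracks redirects, status lists, and history):

```python
class RetryExhausted(Exception):
    """Stands in for urllib3's MaxRetryError in this sketch."""


def increment(counters, kind):
    """Return new counters after one failure of the given kind.

    counters: dict like {"total": 2, "connect": None, "read": 1};
    None means "not limited", as in urllib3.
    kind: error category that occurred ("connect", "read", "other").
    Raises RetryExhausted once any tracked counter drops below zero.
    """
    new = dict(counters)
    if new["total"] is not None:
        new["total"] -= 1
    if new.get(kind) is not None:
        new[kind] -= 1
    if any(v is not None and v < 0 for v in new.values()):
        raise RetryExhausted(f"retries exhausted: {new}")
    return new


c = {"total": 1, "connect": None, "read": None}
c = increment(c, "connect")  # total 1 -> 0: one more failure allowed
try:
    increment(c, "connect")  # total 0 -> -1: exhausted
except RetryExhausted:
    print("max retries exceeded")
```

This also shows why the test run above fails immediately: with `Retry(total=0)`, the very first connection refusal drives the counter negative and `MaxRetryError` is raised.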
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
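The `tmp(*parts)` helper in `setup_class` above creates nested directories under a `TemporaryDirectory` and tolerates repeated calls via the EEXIST check; on modern Python the same behaviour can be written with `exist_ok=True`. A self-contained sketch:

```python
import os
from tempfile import TemporaryDirectory

with TemporaryDirectory() as base:
    def tmp(*parts):
        """Create (if needed) and return a directory under the sandbox,
        as the EEXIST-tolerant helper in setup_class does."""
        path = os.path.join(base, *parts)
        os.makedirs(path, exist_ok=True)  # tolerate repeated calls
        return path

    home = tmp("home")
    same = tmp("home")  # second call must not fail
    created = os.path.isdir(tmp("share", "jupyter"))
    print(home == same, created)  # True True
```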
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s ____________ ERROR at setup of APITest.test_get_text_file_contents _____________ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
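Before the `getaddrinfo` loop, `create_connection` normalizes the host: a bracketed IPv6 literal such as `[::1]` loses its brackets. The normalization step can be shown in isolation (sketch of that step only, not the socket loop):

```python
def normalize_host(host):
    """Strip brackets from an IPv6 literal, as create_connection does
    before handing the host to socket.getaddrinfo."""
    if host.startswith("["):
        host = host.strip("[]")
    return host


print(normalize_host("[::1]"))      # ::1
print(normalize_host("localhost"))  # localhost
```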
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
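The `retries` parameter documented above accepts several forms: an existing `Retry` object, `None` (retry until a response), an integer count, or `False` (disabled). A simplified, hypothetical normalizer in the spirit of `Retry.from_int` — note the real urllib3 keeps `False` distinct from a zero count (it also changes redirect handling), which this sketch collapses to zero for brevity:

```python
class SimpleRetry:
    """Stand-in for urllib3's Retry, tracking only a total count."""

    def __init__(self, total):
        self.total = total


def normalize_retries(retries, default_total=3):
    """Map the argument forms described above onto a SimpleRetry.

    An existing object passes through, None means the pool default,
    False disables retries (modeled here as zero), and an int sets
    the count. (Illustrative sketch, not Retry.from_int itself.)
    """
    if isinstance(retries, SimpleRetry):
        return retries
    if retries is None:
        return SimpleRetry(default_total)
    if retries is False:  # must be checked before int(), since False is an int
        return SimpleRetry(0)
    return SimpleRetry(int(retries))


print(normalize_retries(None).total)   # 3
print(normalize_retries(False).total)  # 0
print(normalize_retries(5).total)      # 5
```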
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send
281s     self.connect()
281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
281s     self.sock = self._new_conn()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s             sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s         except socket.gaierror as e:
281s             raise NameResolutionError(self.host, self, e) from e
281s         except SocketTimeout as e:
281s             raise ConnectTimeoutError(
281s                 self,
281s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
281s             ) from e
281s 
281s         except OSError as e:
281s >           raise NewConnectionError(
281s                 self, f"Failed to establish a new connection: {e}"
281s             ) from e
281s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s >           resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
281s     retries = retries.increment(
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s 
281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s method = 'GET', url = '/a%40b/api/contents', response = None
281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
281s _pool = 
281s _stacktrace = 
281s 
281s     def increment(
281s         self,
281s         method: str | None = None,
281s         url: str | None = None,
281s         response: BaseHTTPResponse | None = None,
281s         error: Exception | None = None,
281s         _pool: ConnectionPool | None = None,
281s         _stacktrace: TracebackType | None = None,
281s     ) -> Retry:
281s         """Return a new Retry object with incremented retry counters.
281s 
281s         :param response: A response object, or None, if the server did not
281s             return a response.
281s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
281s         :param Exception error: An error encountered during the request, or
281s             None if the response was received successfully.
281s 
281s         :return: A new ``Retry`` object.
281s         """
281s         if self.total is False and error:
281s             # Disabled, indicate to re-raise the error.
281s             raise reraise(type(error), error, _stacktrace)
281s 
281s         total = self.total
281s         if total is not None:
281s             total -= 1
281s 
281s         connect = self.connect
281s         read = self.read
281s         redirect = self.redirect
281s         status_count = self.status
281s         other = self.other
281s         cause = "unknown"
281s         status = None
281s         redirect_location = None
281s 
281s         if error and self._is_connection_error(error):
281s             # Connect retry?
281s             if connect is False:
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif connect is not None:
281s                 connect -= 1
281s 
281s         elif error and self._is_read_error(error):
281s             # Read retry?
281s             if read is False or method is None or not self._is_method_retryable(method):
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif read is not None:
281s                 read -= 1
281s 
281s         elif error:
281s             # Other retry?
281s             if other is not None:
281s                 other -= 1
281s 
281s         elif response and response.get_redirect_location():
281s             # Redirect retry?
281s             if redirect is not None:
281s                 redirect -= 1
281s             cause = "too many redirects"
281s             response_redirect_location = response.get_redirect_location()
281s             if response_redirect_location:
281s                 redirect_location = response_redirect_location
281s             status = response.status
281s 
281s         else:
281s             # Incrementing because of a server error like a 500 in
281s             # status_forcelist and the given method is in the allowed_methods
281s             cause = ResponseError.GENERIC_ERROR
281s             if response and response.status:
281s                 if status_count is not None:
281s                     status_count -= 1
281s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
281s                 status = response.status
281s 
281s         history = self.history + (
281s             RequestHistory(method, url, error, status, redirect_location),
281s         )
281s 
281s         new_retry = self.new(
281s             total=total,
281s             connect=connect,
281s             read=read,
281s             redirect=redirect,
281s             status=status_count,
281s             other=other,
281s             history=history,
281s         )
281s 
281s         if new_retry.is_exhausted():
281s             reason = error or ResponseError(cause)
281s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
281s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s >               cls.fetch_url(url)
281s 
281s notebook/tests/launchnotebook.py:53: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s notebook/tests/launchnotebook.py:82: in fetch_url
281s     return requests.get(url)
281s /usr/lib/python3/dist-packages/requests/api.py:73: in get
281s     return request("get", url, params=params, **kwargs)
281s /usr/lib/python3/dist-packages/requests/api.py:59: in request
281s     return session.request(method=method, url=url, **kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
281s     resp = self.send(prep, **send_kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
281s     r = adapter.send(request, **kwargs)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s             resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s         except (ProtocolError, OSError) as err:
281s             raise ConnectionError(err, request=request)
281s 
281s         except MaxRetryError as e:
281s             if isinstance(e.reason, ConnectTimeoutError):
281s                 # TODO: Remove this in 3.0.0: see #2811
281s                 if not isinstance(e.reason, NewConnectionError):
281s                     raise ConnectTimeout(e, request=request)
281s 
281s             if isinstance(e.reason, ResponseError):
281s                 raise RetryError(e, request=request)
281s 
281s             if isinstance(e.reason, _ProxyError):
281s                 raise ProxyError(e, request=request)
281s 
281s             if isinstance(e.reason, _SSLError):
281s                 # This branch is for urllib3 v1.22 and later.
281s                 raise SSLError(e, request=request)
281s 
281s >           raise ConnectionError(e, request=request)
281s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def setup_class(cls):
281s         cls.tmp_dir = TemporaryDirectory()
281s         def tmp(*parts):
281s             path = os.path.join(cls.tmp_dir.name, *parts)
281s             try:
281s                 os.makedirs(path)
281s             except OSError as e:
281s                 if e.errno != errno.EEXIST:
281s                     raise
281s             return path
281s 
281s         cls.home_dir = tmp('home')
281s         data_dir = cls.data_dir = tmp('data')
281s         config_dir = cls.config_dir = tmp('config')
281s         runtime_dir = cls.runtime_dir = tmp('runtime')
281s         cls.notebook_dir = tmp('notebooks')
281s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
281s         cls.env_patch.start()
281s         # Patch systemwide & user-wide data & config directories, to isolate
281s         # the tests from oddities of the local setup. But leave Python env
281s         # locations alone, so data files for e.g. nbconvert are accessible.
281s         # If this isolation isn't sufficient, you may need to run the tests in
281s         # a virtualenv or conda env.
281s         cls.path_patch = patch.multiple(
281s             jupyter_core.paths,
281s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
281s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
281s         )
281s         cls.path_patch.start()
281s 
281s         config = cls.config or Config()
281s         config.NotebookNotary.db_file = ':memory:'
281s 
281s         cls.token = hexlify(os.urandom(4)).decode('ascii')
281s 
281s         started = Event()
281s         def start_thread():
281s             try:
281s                 bind_args = cls.get_bind_args()
281s                 app = cls.notebook = NotebookApp(
281s                     port_retries=0,
281s                     open_browser=False,
281s                     config_dir=cls.config_dir,
281s                     data_dir=cls.data_dir,
281s                     runtime_dir=cls.runtime_dir,
281s                     notebook_dir=cls.notebook_dir,
281s                     base_url=cls.url_prefix,
281s                     config=config,
281s                     allow_root=True,
281s                     token=cls.token,
281s                     **bind_args
281s                 )
281s                 if "asyncio" in sys.modules:
281s                     app._init_asyncio_patch()
281s                     import asyncio
281s 
281s                     asyncio.set_event_loop(asyncio.new_event_loop())
281s                     # Patch the current loop in order to match production
281s                     # behavior
281s                     import nest_asyncio
281s 
281s                     nest_asyncio.apply()
281s                 # don't register signal handler during tests
281s                 app.init_signal = lambda : None
281s                 # clear log handlers and propagate to root for nose to capture it
281s                 # needs to be redone after initialize, which reconfigures logging
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 app.initialize(argv=cls.get_argv())
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 loop = IOLoop.current()
281s                 loop.add_callback(started.set)
281s                 app.start()
281s             finally:
281s                 # set the event, so failure to start doesn't cause a hang
281s                 started.set()
281s                 app.session_manager.close()
281s         cls.notebook_thread = Thread(target=start_thread)
281s         cls.notebook_thread.daemon = True
281s         cls.notebook_thread.start()
281s         started.wait()
281s >       cls.wait_until_alive()
281s 
281s notebook/tests/launchnotebook.py:198: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E                   RuntimeError: The notebook server failed to start
281s 
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s ___________________ ERROR at setup of APITest.test_list_dirs ___________________
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s >           sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s     raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s 
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s 
281s     def create_connection(
281s         address: tuple[str, int],
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         source_address: tuple[str, int] | None = None,
281s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s     ) -> socket.socket:
281s         """Connect to *address* and return the socket object.
281s 
281s         Convenience function.  Connect to *address* (a 2-tuple ``(host,
281s         port)``) and return the socket object.  Passing the optional
281s         *timeout* parameter will set the timeout on the socket instance
281s         before attempting to connect.  If no *timeout* is supplied, the
281s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
281s         is used.  If *source_address* is set it must be a tuple of (host, port)
281s         for the socket to bind as a source address before making the connection.
281s         An host of '' or port 0 tells the OS to use the default.
281s         """
281s 
281s         host, port = address
281s         if host.startswith("["):
281s             host = host.strip("[]")
281s         err = None
281s 
281s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
281s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
281s         # The original create_connection function always returns all records.
281s         family = allowed_gai_family()
281s 
281s         try:
281s             host.encode("idna")
281s         except UnicodeError:
281s             raise LocationParseError(f"'{host}', label empty or too long") from None
281s 
281s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
281s             af, socktype, proto, canonname, sa = res
281s             sock = None
281s             try:
281s                 sock = socket.socket(af, socktype, proto)
281s 
281s                 # If provided, set socket level options before connecting.
281s                 _set_socket_options(sock, socket_options)
281s 
281s                 if timeout is not _DEFAULT_TIMEOUT:
281s                     sock.settimeout(timeout)
281s                 if source_address:
281s                     sock.bind(source_address)
281s >               sock.connect(sa)
281s E               ConnectionRefusedError: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s method = 'GET', url = '/a%40b/api/contents', body = None
281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s redirect = False, assert_same_host = False
281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
281s release_conn = False, chunked = False, body_pos = None, preload_content = False
281s decode_content = False, response_kw = {}
281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
281s destination_scheme = None, conn = None, release_this_conn = True
281s http_tunnel_required = False, err = None, clean_exit = False
281s 
281s     def urlopen(  # type: ignore[override]
281s         self,
281s         method: str,
281s         url: str,
281s         body: _TYPE_BODY | None = None,
281s         headers: typing.Mapping[str, str] | None = None,
281s         retries: Retry | bool | int | None = None,
281s         redirect: bool = True,
281s         assert_same_host: bool = True,
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         pool_timeout: int | None = None,
281s         release_conn: bool | None = None,
281s         chunked: bool = False,
281s         body_pos: _TYPE_BODY_POSITION | None = None,
281s         preload_content: bool = True,
281s         decode_content: bool = True,
281s         **response_kw: typing.Any,
281s     ) -> BaseHTTPResponse:
281s         """
281s         Get a connection from the pool and perform an HTTP request. This is the
281s         lowest level call for making a request, so you'll need to specify all
281s         the raw details.
281s 
281s         .. note::
281s 
281s            More commonly, it's appropriate to use a convenience method
281s            such as :meth:`request`.
281s 
281s         .. note::
281s 
281s            `release_conn` will only behave as expected if
281s            `preload_content=False` because we want to make
281s            `preload_content=False` the default behaviour someday soon without
281s            breaking backwards compatibility.
281s 
281s         :param method:
281s             HTTP request method (such as GET, POST, PUT, etc.)
281s 
281s         :param url:
281s             The URL to perform the request on.
281s 
281s         :param body:
281s             Data to send in the request body, either :class:`str`, :class:`bytes`,
281s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
281s 
281s         :param headers:
281s             Dictionary of custom headers to send, such as User-Agent,
281s             If-None-Match, etc. If None, pool headers are used. If provided,
281s             these headers completely replace any pool-specific headers.
281s 
281s         :param retries:
281s             Configure the number of retries to allow before raising a
281s             :class:`~urllib3.exceptions.MaxRetryError` exception.
281s 
281s             Pass ``None`` to retry until you receive a response. Pass a
281s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
281s             over different types of retries.
281s             Pass an integer number to retry connection errors that many times,
281s             but no other types of errors. Pass zero to never retry.
281s 
281s             If ``False``, then retries are disabled and any exception is raised
281s             immediately. Also, instead of raising a MaxRetryError on redirects,
281s             the redirect response will be returned.
281s 
281s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
281s 
281s         :param redirect:
281s             If True, automatically handle redirects (status codes 301, 302,
281s             303, 307, 308). Each redirect counts as a retry. Disabling retries
281s             will disable redirect, too.
281s 
281s         :param assert_same_host:
281s             If ``True``, will make sure that the host of the pool requests is
281s             consistent else will raise HostChangedError. When ``False``, you can
281s             use the pool on an HTTP proxy and request foreign hosts.
281s 
281s         :param timeout:
281s             If specified, overrides the default timeout for this one
281s             request. It may be a float (in seconds) or an instance of
281s             :class:`urllib3.util.Timeout`.
281s 
281s         :param pool_timeout:
281s             If set and the pool is set to block=True, then this method will
281s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
281s             connection is available within the time period.
281s 
281s         :param bool preload_content:
281s             If True, the response's body will be preloaded into memory.
281s 
281s         :param bool decode_content:
281s             If True, will attempt to decode the body based on the
281s             'content-encoding' header.
281s 
281s         :param release_conn:
281s             If False, then the urlopen call will not release the connection
281s             back into the pool once a response is received (but will release if
281s             you read the entire contents of the response such as when
281s             `preload_content=True`). This is useful if you're not preloading
281s             the response's content immediately. You will need to call
281s             ``r.release_conn()`` on the response ``r`` to return the connection
281s             back into the pool. If None, it takes the value of ``preload_content``
281s             which defaults to ``True``.
281s 
281s         :param bool chunked:
281s             If True, urllib3 will send the body using chunked transfer
281s             encoding. Otherwise, urllib3 will send the body using the standard
281s             content-length form. Defaults to False.
281s 
281s         :param int body_pos:
281s             Position to seek to in file-like body in the event of a retry or
281s             redirect. Typically this won't need to be set because urllib3 will
281s             auto-populate the value when needed.
281s         """
281s         parsed_url = parse_url(url)
281s         destination_scheme = parsed_url.scheme
281s 
281s         if headers is None:
281s             headers = self.headers
281s 
281s         if not isinstance(retries, Retry):
281s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
281s 
281s         if release_conn is None:
281s             release_conn = preload_content
281s 
281s         # Check host
281s         if assert_same_host and not self.is_same_host(url):
281s             raise HostChangedError(self, url, retries)
281s 
281s         # Ensure that the URL we're connecting to is properly encoded
281s         if url.startswith("/"):
281s             url = to_str(_encode_target(url))
281s         else:
281s             url = to_str(parsed_url.url)
281s 
281s         conn = None
281s 
281s         # Track whether `conn` needs to be released before
281s         # returning/raising/recursing. Update this variable if necessary, and
281s         # leave `release_conn` constant throughout the function. That way, if
281s         # the function recurses, the original value of `release_conn` will be
281s         # passed down into the recursive call, and its value will be respected.
281s         #
281s         # See issue #651 [1] for details.
281s         #
281s         # [1] 
281s         release_this_conn = release_conn
281s 
281s         http_tunnel_required = connection_requires_http_tunnel(
281s             self.proxy, self.proxy_config, destination_scheme
281s         )
281s 
281s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
281s         # have to copy the headers dict so we can safely change it without those
281s         # changes being reflected in anyone else's copy.
281s         if not http_tunnel_required:
281s             headers = headers.copy()  # type: ignore[attr-defined]
281s             headers.update(self.proxy_headers)  # type: ignore[union-attr]
281s 
281s         # Must keep the exception bound to a separate variable or else Python 3
281s         # complains about UnboundLocalError.
281s         err = None
281s 
281s         # Keep track of whether we cleanly exited the except block. This
281s         # ensures we do proper cleanup in finally.
281s         clean_exit = False
281s 
281s         # Rewind body position, if needed. Record current position
281s         # for future rewinds in the event of a redirect/retry.
281s         body_pos = set_file_position(body, body_pos)
281s 
281s         try:
281s             # Request a connection from the queue.
281s             timeout_obj = self._get_timeout(timeout)
281s             conn = self._get_conn(timeout=pool_timeout)
281s 
281s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
281s 
281s             # Is this a closed/new connection that requires CONNECT tunnelling?
281s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
281s                 try:
281s                     self._prepare_proxy(conn)
281s                 except (BaseSSLError, OSError, SocketTimeout) as e:
281s                     self._raise_timeout(
281s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
281s                     )
281s                     raise
281s 
281s             # If we're going to release the connection in ``finally:``, then
281s             # the response doesn't need to know about the connection. Otherwise
281s             # it will also try to release it and we'll have a double-release
281s             # mess.
281s             response_conn = conn if not release_conn else None
281s 
281s             # Make the request on the HTTPConnection object
281s >           response = self._make_request(
281s                 conn,
281s                 method,
281s                 url,
281s                 timeout=timeout_obj,
281s                 body=body,
281s                 headers=headers,
281s                 chunked=chunked,
281s                 retries=retries,
281s                 response_conn=response_conn,
281s                 preload_content=preload_content,
281s                 decode_content=decode_content,
281s                 **response_kw,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
281s     conn.request(
281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
281s     self.endheaders()
281s /usr/lib/python3.12/http/client.py:1331: in endheaders
281s     self._send_output(message_body, encode_chunked=encode_chunked)
281s /usr/lib/python3.12/http/client.py:1091: in _send_output
281s     self.send(msg)
281s /usr/lib/python3.12/http/client.py:1035: in send
281s     self.connect()
281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
281s     self.sock = self._new_conn()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s             sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s         except socket.gaierror as e:
281s             raise NameResolutionError(self.host, self, e) from e
281s         except SocketTimeout as e:
281s             raise ConnectTimeoutError(
281s                 self,
281s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
281s             ) from e
281s 
281s         except OSError as e:
281s >           raise NewConnectionError(
281s                 self, f"Failed to establish a new connection: {e}"
281s             ) from e
281s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s >           resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
281s     retries = retries.increment(
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s 
281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s method = 'GET', url = '/a%40b/api/contents', response = None
281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
281s _pool = 
281s _stacktrace = 
281s 
281s     def increment(
281s         self,
281s         method: str | None = None,
281s         url: str | None = None,
281s         response: BaseHTTPResponse | None = None,
281s         error: Exception | None = None,
281s         _pool: ConnectionPool | None = None,
281s         _stacktrace: TracebackType | None = None,
281s     ) -> Retry:
281s         """Return a new Retry object with incremented retry counters.
281s 
281s         :param response: A response object, or None, if the server did not
281s             return a response.
281s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
281s         :param Exception error: An error encountered during the request, or
281s             None if the response was received successfully.
281s 
281s         :return: A new ``Retry`` object.
281s         """
281s         if self.total is False and error:
281s             # Disabled, indicate to re-raise the error.
281s             raise reraise(type(error), error, _stacktrace)
281s 
281s         total = self.total
281s         if total is not None:
281s             total -= 1
281s 
281s         connect = self.connect
281s         read = self.read
281s         redirect = self.redirect
281s         status_count = self.status
281s         other = self.other
281s         cause = "unknown"
281s         status = None
281s         redirect_location = None
281s 
281s         if error and self._is_connection_error(error):
281s             # Connect retry?
281s             if connect is False:
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif connect is not None:
281s                 connect -= 1
281s 
281s         elif error and self._is_read_error(error):
281s             # Read retry?
281s             if read is False or method is None or not self._is_method_retryable(method):
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif read is not None:
281s                 read -= 1
281s 
281s         elif error:
281s             # Other retry?
281s             if other is not None:
281s                 other -= 1
281s 
281s         elif response and response.get_redirect_location():
281s             # Redirect retry?
281s             if redirect is not None:
281s                 redirect -= 1
281s             cause = "too many redirects"
281s             response_redirect_location = response.get_redirect_location()
281s             if response_redirect_location:
281s                 redirect_location = response_redirect_location
281s             status = response.status
281s 
281s         else:
281s             # Incrementing because of a server error like a 500 in
281s             # status_forcelist and the given method is in the allowed_methods
281s             cause = ResponseError.GENERIC_ERROR
281s             if response and response.status:
281s                 if status_count is not None:
281s                     status_count -= 1
281s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
281s                 status = response.status
281s 
281s         history = self.history + (
281s             RequestHistory(method, url, error, status, redirect_location),
281s         )
281s 
281s         new_retry = self.new(
281s             total=total,
281s             connect=connect,
281s             read=read,
281s             redirect=redirect,
281s             status=status_count,
281s             other=other,
281s             history=history,
281s         )
281s 
281s         if new_retry.is_exhausted():
281s             reason = error or ResponseError(cause)
281s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
281s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s >               cls.fetch_url(url)
281s 
281s notebook/tests/launchnotebook.py:53: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s notebook/tests/launchnotebook.py:82: in fetch_url
281s     return requests.get(url)
281s /usr/lib/python3/dist-packages/requests/api.py:73: in get
281s     return request("get", url, params=params, **kwargs)
281s /usr/lib/python3/dist-packages/requests/api.py:59: in request
281s     return session.request(method=method, url=url, **kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
281s     resp = self.send(prep, **send_kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
281s     r = adapter.send(request, **kwargs)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s             resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s         except (ProtocolError, OSError) as err:
281s             raise ConnectionError(err, request=request)
281s 
281s         except MaxRetryError as e:
281s             if isinstance(e.reason, ConnectTimeoutError):
281s                 # TODO: Remove this in 3.0.0: see #2811
281s                 if not isinstance(e.reason, NewConnectionError):
281s                     raise ConnectTimeout(e, request=request)
281s 
281s             if isinstance(e.reason, ResponseError):
281s                 raise RetryError(e, request=request)
281s 
281s             if isinstance(e.reason, _ProxyError):
281s                 raise ProxyError(e, request=request)
281s 
281s             if isinstance(e.reason, _SSLError):
281s                 # This branch is for urllib3 v1.22 and later.
281s                 raise SSLError(e, request=request)
281s 
281s >       raise ConnectionError(e, request=request)
281s E   requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def setup_class(cls):
281s         cls.tmp_dir = TemporaryDirectory()
281s         def tmp(*parts):
281s             path = os.path.join(cls.tmp_dir.name, *parts)
281s             try:
281s                 os.makedirs(path)
281s             except OSError as e:
281s                 if e.errno != errno.EEXIST:
281s                     raise
281s             return path
281s 
281s         cls.home_dir = tmp('home')
281s         data_dir = cls.data_dir = tmp('data')
281s         config_dir = cls.config_dir = tmp('config')
281s         runtime_dir = cls.runtime_dir = tmp('runtime')
281s         cls.notebook_dir = tmp('notebooks')
281s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
281s         cls.env_patch.start()
281s         # Patch systemwide & user-wide data & config directories, to isolate
281s         # the tests from oddities of the local setup. But leave Python env
281s         # locations alone, so data files for e.g. nbconvert are accessible.
281s         # If this isolation isn't sufficient, you may need to run the tests in
281s         # a virtualenv or conda env.
281s         cls.path_patch = patch.multiple(
281s             jupyter_core.paths,
281s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
281s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
281s         )
281s         cls.path_patch.start()
281s 
281s         config = cls.config or Config()
281s         config.NotebookNotary.db_file = ':memory:'
281s 
281s         cls.token = hexlify(os.urandom(4)).decode('ascii')
281s 
281s         started = Event()
281s         def start_thread():
281s             try:
281s                 bind_args = cls.get_bind_args()
281s                 app = cls.notebook = NotebookApp(
281s                     port_retries=0,
281s                     open_browser=False,
281s                     config_dir=cls.config_dir,
281s                     data_dir=cls.data_dir,
281s                     runtime_dir=cls.runtime_dir,
281s                     notebook_dir=cls.notebook_dir,
281s                     base_url=cls.url_prefix,
281s                     config=config,
281s                     allow_root=True,
281s                     token=cls.token,
281s                     **bind_args
281s                 )
281s                 if "asyncio" in sys.modules:
281s                     app._init_asyncio_patch()
281s                 import asyncio
281s 
281s                 asyncio.set_event_loop(asyncio.new_event_loop())
281s                 # Patch the current loop in order to match production
281s                 # behavior
281s                 import nest_asyncio
281s 
281s                 nest_asyncio.apply()
281s                 # don't register signal handler during tests
281s                 app.init_signal = lambda : None
281s                 # clear log handlers and propagate to root for nose to capture it
281s                 # needs to be redone after initialize, which reconfigures logging
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 app.initialize(argv=cls.get_argv())
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 loop = IOLoop.current()
281s                 loop.add_callback(started.set)
281s                 app.start()
281s             finally:
281s                 # set the event, so failure to start doesn't cause a hang
281s                 started.set()
281s                 app.session_manager.close()
281s         cls.notebook_thread = Thread(target=start_thread)
281s         cls.notebook_thread.daemon = True
281s         cls.notebook_thread.start()
281s         started.wait()
281s >       cls.wait_until_alive()
281s 
281s notebook/tests/launchnotebook.py:198: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E   RuntimeError: The notebook server failed to start
281s 
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s _____________ ERROR at setup of APITest.test_list_nonexistant_dir ______________
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s >           sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s     raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s 
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s 
281s     def create_connection(
281s         address: tuple[str, int],
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         source_address: tuple[str, int] | None = None,
281s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s     ) -> socket.socket:
281s         """Connect to *address* and return the socket object.
281s 
281s         Convenience function. Connect to *address* (a 2-tuple ``(host,
281s         port)``) and return the socket object.
281s         Passing the optional
281s         *timeout* parameter will set the timeout on the socket instance
281s         before attempting to connect. If no *timeout* is supplied, the
281s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
281s         is used. If *source_address* is set it must be a tuple of (host, port)
281s         for the socket to bind as a source address before making the connection.
281s         An host of '' or port 0 tells the OS to use the default.
281s         """
281s 
281s         host, port = address
281s         if host.startswith("["):
281s             host = host.strip("[]")
281s         err = None
281s 
281s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
281s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
281s         # The original create_connection function always returns all records.
281s         family = allowed_gai_family()
281s 
281s         try:
281s             host.encode("idna")
281s         except UnicodeError:
281s             raise LocationParseError(f"'{host}', label empty or too long") from None
281s 
281s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
281s             af, socktype, proto, canonname, sa = res
281s             sock = None
281s             try:
281s                 sock = socket.socket(af, socktype, proto)
281s 
281s                 # If provided, set socket level options before connecting.
281s                 _set_socket_options(sock, socket_options)
281s 
281s                 if timeout is not _DEFAULT_TIMEOUT:
281s                     sock.settimeout(timeout)
281s                 if source_address:
281s                     sock.bind(source_address)
281s >               sock.connect(sa)
281s E               ConnectionRefusedError: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s method = 'GET', url = '/a%40b/api/contents', body = None
281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s redirect = False, assert_same_host = False
281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
281s release_conn = False, chunked = False, body_pos = None, preload_content = False
281s decode_content = False, response_kw = {}
281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
281s destination_scheme = None, conn = None, release_this_conn = True
281s http_tunnel_required = False, err = None, clean_exit = False
281s 
281s     def urlopen(  # type: ignore[override]
281s         self,
281s         method: str,
281s         url: str,
281s         body: _TYPE_BODY | None = None,
281s         headers: typing.Mapping[str, str] | None = None,
281s         retries: Retry | bool | int | None = None,
281s         redirect: bool = True,
281s         assert_same_host: bool = True,
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         pool_timeout: int | None = None,
281s         release_conn: bool | None = None,
281s         chunked: bool = False,
281s         body_pos: _TYPE_BODY_POSITION | None = None,
281s         preload_content: bool = True,
281s         decode_content: bool = True,
281s         **response_kw: typing.Any,
281s     ) -> BaseHTTPResponse:
281s         """
281s         Get a connection from the pool and perform an HTTP request.
281s         This is the
281s         lowest level call for making a request, so you'll need to specify all
281s         the raw details.
281s 
281s         .. note::
281s 
281s            More commonly, it's appropriate to use a convenience method
281s            such as :meth:`request`.
281s 
281s         .. note::
281s 
281s            `release_conn` will only behave as expected if
281s            `preload_content=False` because we want to make
281s            `preload_content=False` the default behaviour someday soon without
281s            breaking backwards compatibility.
281s 
281s         :param method:
281s             HTTP request method (such as GET, POST, PUT, etc.)
281s 
281s         :param url:
281s             The URL to perform the request on.
281s 
281s         :param body:
281s             Data to send in the request body, either :class:`str`, :class:`bytes`,
281s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
281s 
281s         :param headers:
281s             Dictionary of custom headers to send, such as User-Agent,
281s             If-None-Match, etc. If None, pool headers are used. If provided,
281s             these headers completely replace any pool-specific headers.
281s 
281s         :param retries:
281s             Configure the number of retries to allow before raising a
281s             :class:`~urllib3.exceptions.MaxRetryError` exception.
281s 
281s             Pass ``None`` to retry until you receive a response. Pass a
281s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
281s             over different types of retries.
281s             Pass an integer number to retry connection errors that many times,
281s             but no other types of errors. Pass zero to never retry.
281s 
281s             If ``False``, then retries are disabled and any exception is raised
281s             immediately. Also, instead of raising a MaxRetryError on redirects,
281s             the redirect response will be returned.
281s 
281s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
281s 
281s         :param redirect:
281s             If True, automatically handle redirects (status codes 301, 302,
281s             303, 307, 308). Each redirect counts as a retry. Disabling retries
281s             will disable redirect, too.
281s 
281s         :param assert_same_host:
281s             If ``True``, will make sure that the host of the pool requests is
281s             consistent else will raise HostChangedError. When ``False``, you can
281s             use the pool on an HTTP proxy and request foreign hosts.
281s 
281s         :param timeout:
281s             If specified, overrides the default timeout for this one
281s             request. It may be a float (in seconds) or an instance of
281s             :class:`urllib3.util.Timeout`.
281s 
281s         :param pool_timeout:
281s             If set and the pool is set to block=True, then this method will
281s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
281s             connection is available within the time period.
281s 
281s         :param bool preload_content:
281s             If True, the response's body will be preloaded into memory.
281s 
281s         :param bool decode_content:
281s             If True, will attempt to decode the body based on the
281s             'content-encoding' header.
281s 
281s         :param release_conn:
281s             If False, then the urlopen call will not release the connection
281s             back into the pool once a response is received (but will release if
281s             you read the entire contents of the response such as when
281s             `preload_content=True`). This is useful if you're not preloading
281s             the response's content immediately. You will need to call
281s             ``r.release_conn()`` on the response ``r`` to return the connection
281s             back into the pool. If None, it takes the value of ``preload_content``
281s             which defaults to ``True``.
281s 
281s         :param bool chunked:
281s             If True, urllib3 will send the body using chunked transfer
281s             encoding. Otherwise, urllib3 will send the body using the standard
281s             content-length form. Defaults to False.
281s 
281s         :param int body_pos:
281s             Position to seek to in file-like body in the event of a retry or
281s             redirect. Typically this won't need to be set because urllib3 will
281s             auto-populate the value when needed.
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s >           resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
281s     retries = retries.increment(
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s 
281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s method = 'GET', url = '/a%40b/api/contents', response = None
281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
281s _pool = 
281s _stacktrace = 
281s 
281s     def increment(
281s         self,
281s         method: str | None = None,
281s         url: str | None = None,
281s         response: BaseHTTPResponse | None = None,
281s         error: Exception | None = None,
281s         _pool: ConnectionPool | None = None,
281s         _stacktrace: TracebackType | None = None,
281s     ) -> Retry:
281s         """Return a new Retry object with incremented retry counters.
281s 
281s         :param response: A response object, or None, if the server did not
281s             return a response.
281s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
281s         :param Exception error: An error encountered during the request, or
281s             None if the response was received successfully.
281s 
281s         :return: A new ``Retry`` object.
281s         """
281s         if self.total is False and error:
281s             # Disabled, indicate to re-raise the error.
    @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E                   RuntimeError: The notebook server failed to start
281s 
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s ________________ ERROR at setup of APITest.test_list_notebooks _________________
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s >           sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s     raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s 
281s def create_connection(
281s     address: tuple[str, int],
281s     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s     source_address: tuple[str, int] | None = None,
281s     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s ) -> socket.socket:
281s     """Connect to *address* and return the socket object.
281s 
281s     Convenience function. Connect to *address* (a 2-tuple ``(host,
281s     port)``) and return the socket object.
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause)
281s >               raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
281s E               urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s >               cls.fetch_url(url)
281s 
281s notebook/tests/launchnotebook.py:53: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s notebook/tests/launchnotebook.py:82: in fetch_url
281s     return requests.get(url)
281s /usr/lib/python3/dist-packages/requests/api.py:73: in get
281s     return request("get", url, params=params, **kwargs)
281s /usr/lib/python3/dist-packages/requests/api.py:59: in request
281s     return session.request(method=method, url=url, **kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
281s     resp = self.send(prep, **send_kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
281s     r = adapter.send(request, **kwargs)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
    @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E                   RuntimeError: The notebook server failed to start
281s 
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s _____________________ ERROR at setup of APITest.test_mkdir _____________________
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s >           sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s     raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s 
281s def create_connection(
281s     address: tuple[str, int],
281s     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s     source_address: tuple[str, int] | None = None,
281s     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s ) -> socket.socket:
281s     """Connect to *address* and return the socket object.
281s 
281s     Convenience function. Connect to *address* (a 2-tuple ``(host,
281s     port)``) and return the socket object.
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
This is the
281s         lowest level call for making a request, so you'll need to specify all
281s         the raw details.
281s 
281s         .. note::
281s 
281s            More commonly, it's appropriate to use a convenience method
281s            such as :meth:`request`.
281s 
281s         .. note::
281s 
281s            `release_conn` will only behave as expected if
281s            `preload_content=False` because we want to make
281s            `preload_content=False` the default behaviour someday soon without
281s            breaking backwards compatibility.
281s 
281s         :param method:
281s             HTTP request method (such as GET, POST, PUT, etc.)
281s 
281s         :param url:
281s             The URL to perform the request on.
281s 
281s         :param body:
281s             Data to send in the request body, either :class:`str`, :class:`bytes`,
281s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
281s 
281s         :param headers:
281s             Dictionary of custom headers to send, such as User-Agent,
281s             If-None-Match, etc. If None, pool headers are used. If provided,
281s             these headers completely replace any pool-specific headers.
281s 
281s         :param retries:
281s             Configure the number of retries to allow before raising a
281s             :class:`~urllib3.exceptions.MaxRetryError` exception.
281s 
281s             Pass ``None`` to retry until you receive a response. Pass a
281s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
281s             over different types of retries.
281s             Pass an integer number to retry connection errors that many times,
281s             but no other types of errors. Pass zero to never retry.
281s 
281s             If ``False``, then retries are disabled and any exception is raised
281s             immediately. Also, instead of raising a MaxRetryError on redirects,
281s             the redirect response will be returned.
281s 
281s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
281s 
281s         :param redirect:
281s             If True, automatically handle redirects (status codes 301, 302,
281s             303, 307, 308). Each redirect counts as a retry. Disabling retries
281s             will disable redirect, too.
281s 
281s         :param assert_same_host:
281s             If ``True``, will make sure that the host of the pool requests is
281s             consistent else will raise HostChangedError. When ``False``, you can
281s             use the pool on an HTTP proxy and request foreign hosts.
281s 
281s         :param timeout:
281s             If specified, overrides the default timeout for this one
281s             request. It may be a float (in seconds) or an instance of
281s             :class:`urllib3.util.Timeout`.
281s 
281s         :param pool_timeout:
281s             If set and the pool is set to block=True, then this method will
281s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
281s             connection is available within the time period.
281s 
281s         :param bool preload_content:
281s             If True, the response's body will be preloaded into memory.
281s 
281s         :param bool decode_content:
281s             If True, will attempt to decode the body based on the
281s             'content-encoding' header.
281s 
281s         :param release_conn:
281s             If False, then the urlopen call will not release the connection
281s             back into the pool once a response is received (but will release if
281s             you read the entire contents of the response such as when
281s             `preload_content=True`). This is useful if you're not preloading
281s             the response's content immediately. You will need to call
281s             ``r.release_conn()`` on the response ``r`` to return the connection
281s             back into the pool. If None, it takes the value of ``preload_content``
281s             which defaults to ``True``.
281s 
281s         :param bool chunked:
281s             If True, urllib3 will send the body using chunked transfer
281s             encoding. Otherwise, urllib3 will send the body using the standard
281s             content-length form. Defaults to False.
281s 
281s         :param int body_pos:
281s             Position to seek to in file-like body in the event of a retry or
281s             redirect. Typically this won't need to be set because urllib3 will
281s             auto-populate the value when needed.
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position
281s         # for future rewinds in the event of a redirect/retry.
281s         body_pos = set_file_position(body, body_pos)
281s 
281s         try:
281s             # Request a connection from the queue.
281s             timeout_obj = self._get_timeout(timeout)
281s             conn = self._get_conn(timeout=pool_timeout)
281s 
281s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
281s 
281s             # Is this a closed/new connection that requires CONNECT tunnelling?
281s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
281s                 try:
281s                     self._prepare_proxy(conn)
281s                 except (BaseSSLError, OSError, SocketTimeout) as e:
281s                     self._raise_timeout(
281s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
281s                     )
281s                     raise
281s 
281s             # If we're going to release the connection in ``finally:``, then
281s             # the response doesn't need to know about the connection. Otherwise
281s             # it will also try to release it and we'll have a double-release
281s             # mess.
281s             response_conn = conn if not release_conn else None
281s 
281s             # Make the request on the HTTPConnection object
281s >           response = self._make_request(
281s                 conn,
281s                 method,
281s                 url,
281s                 timeout=timeout_obj,
281s                 body=body,
281s                 headers=headers,
281s                 chunked=chunked,
281s                 retries=retries,
281s                 response_conn=response_conn,
281s                 preload_content=preload_content,
281s                 decode_content=decode_content,
281s                 **response_kw,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
281s     conn.request(
281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
281s     self.endheaders()
281s /usr/lib/python3.12/http/client.py:1331: in endheaders
281s     self._send_output(message_body, encode_chunked=encode_chunked)
281s /usr/lib/python3.12/http/client.py:1091: in _send_output
281s     self.send(msg)
281s /usr/lib/python3.12/http/client.py:1035: in
send
281s     self.connect()
281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
281s     self.sock = self._new_conn()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s             sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s         except socket.gaierror as e:
281s             raise NameResolutionError(self.host, self, e) from e
281s         except SocketTimeout as e:
281s             raise ConnectTimeoutError(
281s                 self,
281s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
281s             ) from e
281s 
281s         except OSError as e:
281s >           raise NewConnectionError(
281s                 self, f"Failed to establish a new connection: {e}"
281s             ) from e
281s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s >           resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
281s     retries = retries.increment(
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s method = 'GET', url = '/a%40b/api/contents', response = None
281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
281s _pool = 
281s _stacktrace = 
281s 
281s     def increment(
281s         self,
281s         method: str | None = None,
281s         url: str | None = None,
281s         response: BaseHTTPResponse | None = None,
281s         error: Exception | None = None,
281s         _pool: ConnectionPool | None = None,
281s         _stacktrace: TracebackType | None = None,
281s     ) -> Retry:
281s         """Return a new Retry object with incremented retry counters.
281s 
281s         :param response: A response object, or None, if the server did not
281s             return a response.
281s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
281s         :param Exception error: An error encountered during the request, or
281s             None if the response was received successfully.
281s 
281s         :return: A new ``Retry`` object.
281s         """
281s         if self.total is False and error:
281s             # Disabled, indicate to re-raise the error.
281s             raise reraise(type(error), error, _stacktrace)
281s 
281s         total = self.total
281s         if total is not None:
281s             total -= 1
281s 
281s         connect = self.connect
281s         read = self.read
281s         redirect = self.redirect
281s         status_count = self.status
281s         other = self.other
281s         cause = "unknown"
281s         status = None
281s         redirect_location = None
281s 
281s         if error and self._is_connection_error(error):
281s             # Connect retry?
281s             if connect is False:
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif connect is not None:
281s                 connect -= 1
281s 
281s         elif error and self._is_read_error(error):
281s             # Read retry?
281s             if read is False or method is None or not self._is_method_retryable(method):
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif read is not None:
281s                 read -= 1
281s 
281s         elif error:
281s             # Other retry?
281s             if other is not None:
281s                 other -= 1
281s 
281s         elif response and response.get_redirect_location():
281s             # Redirect retry?
281s             if redirect is not None:
281s                 redirect -= 1
281s             cause = "too many redirects"
281s             response_redirect_location = response.get_redirect_location()
281s             if response_redirect_location:
281s                 redirect_location = response_redirect_location
281s             status = response.status
281s 
281s         else:
281s             # Incrementing because of a server error like a 500 in
281s             # status_forcelist and the given method is in the allowed_methods
281s             cause = ResponseError.GENERIC_ERROR
281s             if response and response.status:
281s                 if status_count is not None:
281s                     status_count -= 1
281s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
281s                 status = response.status
281s 
281s         history = self.history + (
281s             RequestHistory(method, url, error, status, redirect_location),
281s         )
281s 
281s         new_retry = self.new(
281s             total=total,
281s             connect=connect,
281s             read=read,
281s             redirect=redirect,
281s             status=status_count,
281s             other=other,
281s             history=history,
281s         )
281s 
281s         if new_retry.is_exhausted():
281s             reason = error or
ResponseError(cause)
281s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
281s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s >               cls.fetch_url(url)
281s 
281s notebook/tests/launchnotebook.py:53: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s notebook/tests/launchnotebook.py:82: in fetch_url
281s     return requests.get(url)
281s /usr/lib/python3/dist-packages/requests/api.py:73: in get
281s     return request("get", url, params=params, **kwargs)
281s /usr/lib/python3/dist-packages/requests/api.py:59: in request
281s     return session.request(method=method, url=url, **kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
281s     resp = self.send(prep, **send_kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
281s     r = adapter.send(request, **kwargs)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s             resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s         except (ProtocolError, OSError) as err:
281s             raise ConnectionError(err, request=request)
281s 
281s         except MaxRetryError as e:
281s             if isinstance(e.reason, ConnectTimeoutError):
281s                 # TODO: Remove this in 3.0.0: see #2811
281s                 if not isinstance(e.reason, NewConnectionError):
281s                     raise ConnectTimeout(e, request=request)
281s 
281s             if isinstance(e.reason, ResponseError):
281s                 raise RetryError(e, request=request)
281s 
281s             if isinstance(e.reason, _ProxyError):
281s                 raise ProxyError(e, request=request)
281s 
281s             if isinstance(e.reason, _SSLError):
281s                 # This branch is for urllib3 v1.22 and later.
281s                 raise SSLError(e, request=request)
281s 
281s >           raise ConnectionError(e, request=request)
281s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def setup_class(cls):
281s         cls.tmp_dir = TemporaryDirectory()
281s         def tmp(*parts):
281s             path = os.path.join(cls.tmp_dir.name, *parts)
281s             try:
281s                 os.makedirs(path)
281s             except OSError as e:
281s                 if e.errno != errno.EEXIST:
281s                     raise
281s             return path
281s 
281s         cls.home_dir = tmp('home')
281s         data_dir = cls.data_dir = tmp('data')
281s         config_dir = cls.config_dir = tmp('config')
281s         runtime_dir = cls.runtime_dir = tmp('runtime')
281s         cls.notebook_dir = tmp('notebooks')
281s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
281s         cls.env_patch.start()
281s         # Patch systemwide & user-wide data & config directories, to isolate
281s         # the tests from oddities of the local setup. But leave Python env
281s         # locations alone, so data files for e.g. nbconvert are accessible.
281s         # If this isolation isn't sufficient, you may need to run the tests in
281s         # a virtualenv or conda env.
281s         cls.path_patch = patch.multiple(
281s             jupyter_core.paths,
281s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
281s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
281s         )
281s         cls.path_patch.start()
281s 
281s         config = cls.config or Config()
281s         config.NotebookNotary.db_file = ':memory:'
281s 
281s         cls.token = hexlify(os.urandom(4)).decode('ascii')
281s 
281s         started = Event()
281s         def start_thread():
281s             try:
281s                 bind_args = cls.get_bind_args()
281s                 app = cls.notebook = NotebookApp(
281s                     port_retries=0,
281s                     open_browser=False,
281s                     config_dir=cls.config_dir,
281s                     data_dir=cls.data_dir,
281s                     runtime_dir=cls.runtime_dir,
281s                     notebook_dir=cls.notebook_dir,
281s                     base_url=cls.url_prefix,
281s                     config=config,
281s                     allow_root=True,
281s                     token=cls.token,
281s                     **bind_args
281s                 )
281s                 if "asyncio" in sys.modules:
281s                     app._init_asyncio_patch()
281s                 import asyncio
281s 
281s                 asyncio.set_event_loop(asyncio.new_event_loop())
281s                 # Patch the current loop in order to match production
281s                 # behavior
281s                 import nest_asyncio
281s 
281s                 nest_asyncio.apply()
281s                 # don't register signal handler during tests
281s                 app.init_signal = lambda : None
281s                 # clear log handlers and propagate to root for nose to capture it
281s                 # needs to be redone after initialize, which reconfigures logging
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 app.initialize(argv=cls.get_argv())
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 loop = IOLoop.current()
281s                 loop.add_callback(started.set)
281s                 app.start()
281s             finally:
281s                 # set the event, so failure to start doesn't cause a hang
281s                 started.set()
281s                 app.session_manager.close()
281s         cls.notebook_thread = Thread(target=start_thread)
281s         cls.notebook_thread.daemon = True
281s         cls.notebook_thread.start()
281s         started.wait()
281s >       cls.wait_until_alive()
281s 
281s notebook/tests/launchnotebook.py:198: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s cls = 
281s 
281s 
@classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E                   RuntimeError: The notebook server failed to start
281s 
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s _______________ ERROR at setup of APITest.test_mkdir_hidden_400 ________________
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s >           sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s     raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s 
281s     def create_connection(
281s         address: tuple[str, int],
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         source_address: tuple[str, int] | None = None,
281s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s     ) -> socket.socket:
281s         """Connect to *address* and return the socket object.
281s 
281s         Convenience function. Connect to *address* (a 2-tuple ``(host,
281s         port)``) and return the socket object.
Passing the optional
281s         *timeout* parameter will set the timeout on the socket instance
281s         before attempting to connect. If no *timeout* is supplied, the
281s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
281s         is used. If *source_address* is set it must be a tuple of (host, port)
281s         for the socket to bind as a source address before making the connection.
281s         An host of '' or port 0 tells the OS to use the default.
281s         """
281s 
281s         host, port = address
281s         if host.startswith("["):
281s             host = host.strip("[]")
281s         err = None
281s 
281s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
281s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
281s         # The original create_connection function always returns all records.
281s         family = allowed_gai_family()
281s 
281s         try:
281s             host.encode("idna")
281s         except UnicodeError:
281s             raise LocationParseError(f"'{host}', label empty or too long") from None
281s 
281s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
281s             af, socktype, proto, canonname, sa = res
281s             sock = None
281s             try:
281s                 sock = socket.socket(af, socktype, proto)
281s 
281s                 # If provided, set socket level options before connecting.
281s                 _set_socket_options(sock, socket_options)
281s 
281s                 if timeout is not _DEFAULT_TIMEOUT:
281s                     sock.settimeout(timeout)
281s                 if source_address:
281s                     sock.bind(source_address)
281s >               sock.connect(sa)
281s E               ConnectionRefusedError: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s method = 'GET', url = '/a%40b/api/contents', body = None
281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s redirect = False, assert_same_host = False
281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
281s release_conn = False, chunked = False, body_pos = None, preload_content = False
281s decode_content = False, response_kw = {}
281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
281s destination_scheme = None, conn = None, release_this_conn = True
281s http_tunnel_required = False, err = None, clean_exit = False
281s 
281s     def urlopen(  # type: ignore[override]
281s         self,
281s         method: str,
281s         url: str,
281s         body: _TYPE_BODY | None = None,
281s         headers: typing.Mapping[str, str] | None = None,
281s         retries: Retry | bool | int | None = None,
281s         redirect: bool = True,
281s         assert_same_host: bool = True,
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         pool_timeout: int | None = None,
281s         release_conn: bool | None = None,
281s         chunked: bool = False,
281s         body_pos: _TYPE_BODY_POSITION | None = None,
281s         preload_content: bool = True,
281s         decode_content: bool = True,
281s         **response_kw: typing.Any,
281s     ) -> BaseHTTPResponse:
281s         """
281s         Get a connection from the pool and perform an HTTP request.
This is the
281s         lowest level call for making a request, so you'll need to specify all
281s         the raw details.
281s 
281s         .. note::
281s 
281s            More commonly, it's appropriate to use a convenience method
281s            such as :meth:`request`.
281s 
281s         .. note::
281s 
281s            `release_conn` will only behave as expected if
281s            `preload_content=False` because we want to make
281s            `preload_content=False` the default behaviour someday soon without
281s            breaking backwards compatibility.
281s 
281s         :param method:
281s             HTTP request method (such as GET, POST, PUT, etc.)
281s 
281s         :param url:
281s             The URL to perform the request on.
281s 
281s         :param body:
281s             Data to send in the request body, either :class:`str`, :class:`bytes`,
281s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
281s 
281s         :param headers:
281s             Dictionary of custom headers to send, such as User-Agent,
281s             If-None-Match, etc. If None, pool headers are used. If provided,
281s             these headers completely replace any pool-specific headers.
281s 
281s         :param retries:
281s             Configure the number of retries to allow before raising a
281s             :class:`~urllib3.exceptions.MaxRetryError` exception.
281s 
281s             Pass ``None`` to retry until you receive a response. Pass a
281s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
281s             over different types of retries.
281s             Pass an integer number to retry connection errors that many times,
281s             but no other types of errors. Pass zero to never retry.
281s 
281s             If ``False``, then retries are disabled and any exception is raised
281s             immediately. Also, instead of raising a MaxRetryError on redirects,
281s             the redirect response will be returned.
281s 
281s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
281s 
281s         :param redirect:
281s             If True, automatically handle redirects (status codes 301, 302,
281s             303, 307, 308). Each redirect counts as a retry. Disabling retries
281s             will disable redirect, too.
281s 
281s         :param assert_same_host:
281s             If ``True``, will make sure that the host of the pool requests is
281s             consistent else will raise HostChangedError. When ``False``, you can
281s             use the pool on an HTTP proxy and request foreign hosts.
281s 
281s         :param timeout:
281s             If specified, overrides the default timeout for this one
281s             request. It may be a float (in seconds) or an instance of
281s             :class:`urllib3.util.Timeout`.
281s 
281s         :param pool_timeout:
281s             If set and the pool is set to block=True, then this method will
281s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
281s             connection is available within the time period.
281s 
281s         :param bool preload_content:
281s             If True, the response's body will be preloaded into memory.
281s 
281s         :param bool decode_content:
281s             If True, will attempt to decode the body based on the
281s             'content-encoding' header.
281s 
281s         :param release_conn:
281s             If False, then the urlopen call will not release the connection
281s             back into the pool once a response is received (but will release if
281s             you read the entire contents of the response such as when
281s             `preload_content=True`). This is useful if you're not preloading
281s             the response's content immediately. You will need to call
281s             ``r.release_conn()`` on the response ``r`` to return the connection
281s             back into the pool. If None, it takes the value of ``preload_content``
281s             which defaults to ``True``.
281s 
281s         :param bool chunked:
281s             If True, urllib3 will send the body using chunked transfer
281s             encoding. Otherwise, urllib3 will send the body using the standard
281s             content-length form. Defaults to False.
281s 
281s         :param int body_pos:
281s             Position to seek to in file-like body in the event of a retry or
281s             redirect. Typically this won't need to be set because urllib3 will
281s             auto-populate the value when needed.
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s ________________ ERROR at setup of APITest.test_mkdir_untitled _________________ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send
281s     self.connect()
281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
281s     self.sock = self._new_conn()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s             sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s         except socket.gaierror as e:
281s             raise NameResolutionError(self.host, self, e) from e
281s         except SocketTimeout as e:
281s             raise ConnectTimeoutError(
281s                 self,
281s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
281s             ) from e
281s 
281s         except OSError as e:
281s >           raise NewConnectionError(
281s                 self, f"Failed to establish a new connection: {e}"
281s             ) from e
281s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s >           resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
281s     retries = retries.increment(
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s method = 'GET', url = '/a%40b/api/contents', response = None
281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
281s _pool = 
281s _stacktrace = 
281s 
281s     def increment(
281s         self,
281s         method: str | None = None,
281s         url: str | None = None,
281s         response: BaseHTTPResponse | None = None,
281s         error: Exception | None = None,
281s         _pool: ConnectionPool | None = None,
281s         _stacktrace: TracebackType | None = None,
281s     ) -> Retry:
281s         """Return a new Retry object with incremented retry counters.
281s 
281s         :param response: A response object, or None, if the server did not
281s             return a response.
281s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
281s         :param Exception error: An error encountered during the request, or
281s             None if the response was received successfully.
281s 
281s         :return: A new ``Retry`` object.
281s         """
281s         if self.total is False and error:
281s             # Disabled, indicate to re-raise the error.
281s             raise reraise(type(error), error, _stacktrace)
281s 
281s         total = self.total
281s         if total is not None:
281s             total -= 1
281s 
281s         connect = self.connect
281s         read = self.read
281s         redirect = self.redirect
281s         status_count = self.status
281s         other = self.other
281s         cause = "unknown"
281s         status = None
281s         redirect_location = None
281s 
281s         if error and self._is_connection_error(error):
281s             # Connect retry?
281s             if connect is False:
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif connect is not None:
281s                 connect -= 1
281s 
281s         elif error and self._is_read_error(error):
281s             # Read retry?
281s             if read is False or method is None or not self._is_method_retryable(method):
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif read is not None:
281s                 read -= 1
281s 
281s         elif error:
281s             # Other retry?
281s             if other is not None:
281s                 other -= 1
281s 
281s         elif response and response.get_redirect_location():
281s             # Redirect retry?
281s             if redirect is not None:
281s                 redirect -= 1
281s             cause = "too many redirects"
281s             response_redirect_location = response.get_redirect_location()
281s             if response_redirect_location:
281s                 redirect_location = response_redirect_location
281s             status = response.status
281s 
281s         else:
281s             # Incrementing because of a server error like a 500 in
281s             # status_forcelist and the given method is in the allowed_methods
281s             cause = ResponseError.GENERIC_ERROR
281s             if response and response.status:
281s                 if status_count is not None:
281s                     status_count -= 1
281s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
281s                 status = response.status
281s 
281s         history = self.history + (
281s             RequestHistory(method, url, error, status, redirect_location),
281s         )
281s 
281s         new_retry = self.new(
281s             total=total,
281s             connect=connect,
281s             read=read,
281s             redirect=redirect,
281s             status=status_count,
281s             other=other,
281s             history=history,
281s         )
281s 
281s         if new_retry.is_exhausted():
281s             reason = error or ResponseError(cause)
281s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
281s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s >               cls.fetch_url(url)
281s 
281s notebook/tests/launchnotebook.py:53: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s notebook/tests/launchnotebook.py:82: in fetch_url
281s     return requests.get(url)
281s /usr/lib/python3/dist-packages/requests/api.py:73: in get
281s     return request("get", url, params=params, **kwargs)
281s /usr/lib/python3/dist-packages/requests/api.py:59: in request
281s     return session.request(method=method, url=url, **kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
281s     resp = self.send(prep, **send_kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
281s     r = adapter.send(request, **kwargs)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s             resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s         except (ProtocolError, OSError) as err:
281s             raise ConnectionError(err, request=request)
281s 
281s         except MaxRetryError as e:
281s             if isinstance(e.reason, ConnectTimeoutError):
281s                 # TODO: Remove this in 3.0.0: see #2811
281s                 if not isinstance(e.reason, NewConnectionError):
281s                     raise ConnectTimeout(e, request=request)
281s 
281s             if isinstance(e.reason, ResponseError):
281s                 raise RetryError(e, request=request)
281s 
281s             if isinstance(e.reason, _ProxyError):
281s                 raise ProxyError(e, request=request)
281s 
281s             if isinstance(e.reason, _SSLError):
281s                 # This branch is for urllib3 v1.22 and later.
281s                 raise SSLError(e, request=request)
281s 
281s >           raise ConnectionError(e, request=request)
281s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def setup_class(cls):
281s         cls.tmp_dir = TemporaryDirectory()
281s         def tmp(*parts):
281s             path = os.path.join(cls.tmp_dir.name, *parts)
281s             try:
281s                 os.makedirs(path)
281s             except OSError as e:
281s                 if e.errno != errno.EEXIST:
281s                     raise
281s             return path
281s 
281s         cls.home_dir = tmp('home')
281s         data_dir = cls.data_dir = tmp('data')
281s         config_dir = cls.config_dir = tmp('config')
281s         runtime_dir = cls.runtime_dir = tmp('runtime')
281s         cls.notebook_dir = tmp('notebooks')
281s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
281s         cls.env_patch.start()
281s         # Patch systemwide & user-wide data & config directories, to isolate
281s         # the tests from oddities of the local setup. But leave Python env
281s         # locations alone, so data files for e.g. nbconvert are accessible.
281s         # If this isolation isn't sufficient, you may need to run the tests in
281s         # a virtualenv or conda env.
281s         cls.path_patch = patch.multiple(
281s             jupyter_core.paths,
281s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
281s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
281s         )
281s         cls.path_patch.start()
281s 
281s         config = cls.config or Config()
281s         config.NotebookNotary.db_file = ':memory:'
281s 
281s         cls.token = hexlify(os.urandom(4)).decode('ascii')
281s 
281s         started = Event()
281s         def start_thread():
281s             try:
281s                 bind_args = cls.get_bind_args()
281s                 app = cls.notebook = NotebookApp(
281s                     port_retries=0,
281s                     open_browser=False,
281s                     config_dir=cls.config_dir,
281s                     data_dir=cls.data_dir,
281s                     runtime_dir=cls.runtime_dir,
281s                     notebook_dir=cls.notebook_dir,
281s                     base_url=cls.url_prefix,
281s                     config=config,
281s                     allow_root=True,
281s                     token=cls.token,
281s                     **bind_args
281s                 )
281s                 if "asyncio" in sys.modules:
281s                     app._init_asyncio_patch()
281s                     import asyncio
281s 
281s                     asyncio.set_event_loop(asyncio.new_event_loop())
281s                     # Patch the current loop in order to match production
281s                     # behavior
281s                     import nest_asyncio
281s 
281s                     nest_asyncio.apply()
281s                 # don't register signal handler during tests
281s                 app.init_signal = lambda : None
281s                 # clear log handlers and propagate to root for nose to capture it
281s                 # needs to be redone after initialize, which reconfigures logging
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 app.initialize(argv=cls.get_argv())
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 loop = IOLoop.current()
281s                 loop.add_callback(started.set)
281s                 app.start()
281s             finally:
281s                 # set the event, so failure to start doesn't cause a hang
281s                 started.set()
281s                 app.session_manager.close()
281s         cls.notebook_thread = Thread(target=start_thread)
281s         cls.notebook_thread.daemon = True
281s         cls.notebook_thread.start()
281s         started.wait()
281s >       cls.wait_until_alive()
281s 
281s notebook/tests/launchnotebook.py:198: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E                   RuntimeError: The notebook server failed to start
281s 
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s ____________________ ERROR at setup of APITest.test_rename _____________________
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s >           sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s     raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s 
281s     def create_connection(
281s         address: tuple[str, int],
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         source_address: tuple[str, int] | None = None,
281s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s     ) -> socket.socket:
281s         """Connect to *address* and return the socket object.
281s 
281s         Convenience function. Connect to *address* (a 2-tuple ``(host,
281s         port)``) and return the socket object. Passing the optional
281s         *timeout* parameter will set the timeout on the socket instance
281s         before attempting to connect. If no *timeout* is supplied, the
281s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
281s         is used. If *source_address* is set it must be a tuple of (host, port)
281s         for the socket to bind as a source address before making the connection.
281s         An host of '' or port 0 tells the OS to use the default.
281s         """
281s 
281s         host, port = address
281s         if host.startswith("["):
281s             host = host.strip("[]")
281s         err = None
281s 
281s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
281s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
281s         # The original create_connection function always returns all records.
281s         family = allowed_gai_family()
281s 
281s         try:
281s             host.encode("idna")
281s         except UnicodeError:
281s             raise LocationParseError(f"'{host}', label empty or too long") from None
281s 
281s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
281s             af, socktype, proto, canonname, sa = res
281s             sock = None
281s             try:
281s                 sock = socket.socket(af, socktype, proto)
281s 
281s                 # If provided, set socket level options before connecting.
281s                 _set_socket_options(sock, socket_options)
281s 
281s                 if timeout is not _DEFAULT_TIMEOUT:
281s                     sock.settimeout(timeout)
281s                 if source_address:
281s                     sock.bind(source_address)
281s >               sock.connect(sa)
281s E               ConnectionRefusedError: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s method = 'GET', url = '/a%40b/api/contents', body = None
281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s redirect = False, assert_same_host = False
281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
281s release_conn = False, chunked = False, body_pos = None, preload_content = False
281s decode_content = False, response_kw = {}
281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
281s destination_scheme = None, conn = None, release_this_conn = True
281s http_tunnel_required = False, err = None, clean_exit = False
281s 
281s     def urlopen(  # type: ignore[override]
281s         self,
281s         method: str,
281s         url: str,
281s         body: _TYPE_BODY | None = None,
281s         headers: typing.Mapping[str, str] | None = None,
281s         retries: Retry | bool | int | None = None,
281s         redirect: bool = True,
281s         assert_same_host: bool = True,
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         pool_timeout: int | None = None,
281s         release_conn: bool | None = None,
281s         chunked: bool = False,
281s         body_pos: _TYPE_BODY_POSITION | None = None,
281s         preload_content: bool = True,
281s         decode_content: bool = True,
281s         **response_kw: typing.Any,
281s     ) -> BaseHTTPResponse:
281s         """
281s         Get a connection from the pool and perform an HTTP request. This is the
281s         lowest level call for making a request, so you'll need to specify all
281s         the raw details.
281s 
281s         .. note::
281s 
281s            More commonly, it's appropriate to use a convenience method
281s            such as :meth:`request`.
281s 
281s         .. note::
281s 
281s            `release_conn` will only behave as expected if
281s            `preload_content=False` because we want to make
281s            `preload_content=False` the default behaviour someday soon without
281s            breaking backwards compatibility.
281s 
281s         :param method:
281s             HTTP request method (such as GET, POST, PUT, etc.)
281s 
281s         :param url:
281s             The URL to perform the request on.
281s 
281s         :param body:
281s             Data to send in the request body, either :class:`str`, :class:`bytes`,
281s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
281s 
281s         :param headers:
281s             Dictionary of custom headers to send, such as User-Agent,
281s             If-None-Match, etc. If None, pool headers are used. If provided,
281s             these headers completely replace any pool-specific headers.
281s 
281s         :param retries:
281s             Configure the number of retries to allow before raising a
281s             :class:`~urllib3.exceptions.MaxRetryError` exception.
281s 
281s             Pass ``None`` to retry until you receive a response. Pass a
281s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
281s             over different types of retries.
281s             Pass an integer number to retry connection errors that many times,
281s             but no other types of errors. Pass zero to never retry.
281s 
281s             If ``False``, then retries are disabled and any exception is raised
281s             immediately. Also, instead of raising a MaxRetryError on redirects,
281s             the redirect response will be returned.
281s 
281s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
281s 
281s         :param redirect:
281s             If True, automatically handle redirects (status codes 301, 302,
281s             303, 307, 308). Each redirect counts as a retry. Disabling retries
281s             will disable redirect, too.
281s 
281s         :param assert_same_host:
281s             If ``True``, will make sure that the host of the pool requests is
281s             consistent else will raise HostChangedError. When ``False``, you can
281s             use the pool on an HTTP proxy and request foreign hosts.
281s 
281s         :param timeout:
281s             If specified, overrides the default timeout for this one
281s             request. It may be a float (in seconds) or an instance of
281s             :class:`urllib3.util.Timeout`.
281s 
281s         :param pool_timeout:
281s             If set and the pool is set to block=True, then this method will
281s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
281s             connection is available within the time period.
281s 
281s         :param bool preload_content:
281s             If True, the response's body will be preloaded into memory.
281s 
281s         :param bool decode_content:
281s             If True, will attempt to decode the body based on the
281s             'content-encoding' header.
281s 
281s         :param release_conn:
281s             If False, then the urlopen call will not release the connection
281s             back into the pool once a response is received (but will release if
281s             you read the entire contents of the response such as when
281s             `preload_content=True`). This is useful if you're not preloading
281s             the response's content immediately. You will need to call
281s             ``r.release_conn()`` on the response ``r`` to return the connection
281s             back into the pool. If None, it takes the value of ``preload_content``
281s             which defaults to ``True``.
281s 
281s         :param bool chunked:
281s             If True, urllib3 will send the body using chunked transfer
281s             encoding. Otherwise, urllib3 will send the body using the standard
281s             content-length form. Defaults to False.
281s 
281s         :param int body_pos:
281s             Position to seek to in file-like body in the event of a retry or
281s             redirect. Typically this won't need to be set because urllib3 will
281s             auto-populate the value when needed.
281s         """
281s         parsed_url = parse_url(url)
281s         destination_scheme = parsed_url.scheme
281s 
281s         if headers is None:
281s             headers = self.headers
281s 
281s         if not isinstance(retries, Retry):
281s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
281s 
281s         if release_conn is None:
281s             release_conn = preload_content
281s 
281s         # Check host
281s         if assert_same_host and not self.is_same_host(url):
281s             raise HostChangedError(self, url, retries)
281s 
281s         # Ensure that the URL we're connecting to is properly encoded
281s         if url.startswith("/"):
281s             url = to_str(_encode_target(url))
281s         else:
281s             url = to_str(parsed_url.url)
281s 
281s         conn = None
281s 
281s         # Track whether `conn` needs to be released before
281s         # returning/raising/recursing. Update this variable if necessary, and
281s         # leave `release_conn` constant throughout the function. That way, if
281s         # the function recurses, the original value of `release_conn` will be
281s         # passed down into the recursive call, and its value will be respected.
281s         #
281s         # See issue #651 [1] for details.
281s         #
281s         # [1] 
281s         release_this_conn = release_conn
281s 
281s         http_tunnel_required = connection_requires_http_tunnel(
281s             self.proxy, self.proxy_config, destination_scheme
281s         )
281s 
281s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
281s         # have to copy the headers dict so we can safely change it without those
281s         # changes being reflected in anyone else's copy.
281s         if not http_tunnel_required:
281s             headers = headers.copy()  # type: ignore[attr-defined]
281s             headers.update(self.proxy_headers)  # type: ignore[union-attr]
281s 
281s         # Must keep the exception bound to a separate variable or else Python 3
281s         # complains about UnboundLocalError.
281s         err = None
281s 
281s         # Keep track of whether we cleanly exited the except block. This
281s         # ensures we do proper cleanup in finally.
281s         clean_exit = False
281s 
281s         # Rewind body position, if needed. Record current position
281s         # for future rewinds in the event of a redirect/retry.
281s         body_pos = set_file_position(body, body_pos)
281s 
281s         try:
281s             # Request a connection from the queue.
281s             timeout_obj = self._get_timeout(timeout)
281s             conn = self._get_conn(timeout=pool_timeout)
281s 
281s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
281s 
281s             # Is this a closed/new connection that requires CONNECT tunnelling?
281s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
281s                 try:
281s                     self._prepare_proxy(conn)
281s                 except (BaseSSLError, OSError, SocketTimeout) as e:
281s                     self._raise_timeout(
281s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
281s                     )
281s                     raise
281s 
281s             # If we're going to release the connection in ``finally:``, then
281s             # the response doesn't need to know about the connection. Otherwise
281s             # it will also try to release it and we'll have a double-release
281s             # mess.
281s             response_conn = conn if not release_conn else None
281s 
281s             # Make the request on the HTTPConnection object
281s >           response = self._make_request(
281s                 conn,
281s                 method,
281s                 url,
281s                 timeout=timeout_obj,
281s                 body=body,
281s                 headers=headers,
281s                 chunked=chunked,
281s                 retries=retries,
281s                 response_conn=response_conn,
281s                 preload_content=preload_content,
281s                 decode_content=decode_content,
281s                 **response_kw,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
281s     conn.request(
281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
281s     self.endheaders()
281s /usr/lib/python3.12/http/client.py:1331: in endheaders
281s     self._send_output(message_body, encode_chunked=encode_chunked)
281s /usr/lib/python3.12/http/client.py:1091: in _send_output
281s     self.send(msg)
281s /usr/lib/python3.12/http/client.py:1035: in
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s             resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s         except (ProtocolError, OSError) as err:
281s             raise ConnectionError(err, request=request)
281s 
281s         except MaxRetryError as e:
281s             if isinstance(e.reason, ConnectTimeoutError):
281s                 # TODO: Remove this in 3.0.0: see #2811
281s                 if not isinstance(e.reason, NewConnectionError):
281s                     raise ConnectTimeout(e, request=request)
281s 
281s             if isinstance(e.reason, ResponseError):
281s                 raise RetryError(e, request=request)
281s 
281s             if isinstance(e.reason, _ProxyError):
281s                 raise ProxyError(e, request=request)
281s 
281s             if isinstance(e.reason, _SSLError):
281s                 # This branch is for urllib3 v1.22 and later.
281s                 raise SSLError(e, request=request)
281s 
281s >           raise ConnectionError(e, request=request)
281s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def setup_class(cls):
281s         cls.tmp_dir = TemporaryDirectory()
281s         def tmp(*parts):
281s             path = os.path.join(cls.tmp_dir.name, *parts)
281s             try:
281s                 os.makedirs(path)
281s             except OSError as e:
281s                 if e.errno != errno.EEXIST:
281s                     raise
281s             return path
281s 
281s         cls.home_dir = tmp('home')
281s         data_dir = cls.data_dir = tmp('data')
281s         config_dir = cls.config_dir = tmp('config')
281s         runtime_dir = cls.runtime_dir = tmp('runtime')
281s         cls.notebook_dir = tmp('notebooks')
281s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
281s         cls.env_patch.start()
281s         # Patch systemwide & user-wide data & config directories, to isolate
281s         # the tests from oddities of the local setup. But leave Python env
281s         # locations alone, so data files for e.g. nbconvert are accessible.
281s         # If this isolation isn't sufficient, you may need to run the tests in
281s         # a virtualenv or conda env.
281s         cls.path_patch = patch.multiple(
281s             jupyter_core.paths,
281s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
281s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
281s         )
281s         cls.path_patch.start()
281s 
281s         config = cls.config or Config()
281s         config.NotebookNotary.db_file = ':memory:'
281s 
281s         cls.token = hexlify(os.urandom(4)).decode('ascii')
281s 
281s         started = Event()
281s         def start_thread():
281s             try:
281s                 bind_args = cls.get_bind_args()
281s                 app = cls.notebook = NotebookApp(
281s                     port_retries=0,
281s                     open_browser=False,
281s                     config_dir=cls.config_dir,
281s                     data_dir=cls.data_dir,
281s                     runtime_dir=cls.runtime_dir,
281s                     notebook_dir=cls.notebook_dir,
281s                     base_url=cls.url_prefix,
281s                     config=config,
281s                     allow_root=True,
281s                     token=cls.token,
281s                     **bind_args
281s                 )
281s                 if "asyncio" in sys.modules:
281s                     app._init_asyncio_patch()
281s                 import asyncio
281s 
281s                 asyncio.set_event_loop(asyncio.new_event_loop())
281s                 # Patch the current loop in order to match production
281s                 # behavior
281s                 import nest_asyncio
281s 
281s                 nest_asyncio.apply()
281s                 # don't register signal handler during tests
281s                 app.init_signal = lambda : None
281s                 # clear log handlers and propagate to root for nose to capture it
281s                 # needs to be redone after initialize, which reconfigures logging
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 app.initialize(argv=cls.get_argv())
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 loop = IOLoop.current()
281s                 loop.add_callback(started.set)
281s                 app.start()
281s             finally:
281s                 # set the event, so failure to start doesn't cause a hang
281s                 started.set()
281s                 app.session_manager.close()
281s         cls.notebook_thread = Thread(target=start_thread)
281s         cls.notebook_thread.daemon = True
281s         cls.notebook_thread.start()
281s         started.wait()
281s >       cls.wait_until_alive()
281s 
281s notebook/tests/launchnotebook.py:198:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E                   RuntimeError: The notebook server failed to start
281s 
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s _______________ ERROR at setup of APITest.test_rename_400_hidden _______________
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s >           sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s     raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s 
281s     def create_connection(
281s         address: tuple[str, int],
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         source_address: tuple[str, int] | None = None,
281s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s     ) -> socket.socket:
281s         """Connect to *address* and return the socket object.
281s 
281s         Convenience function.  Connect to *address* (a 2-tuple ``(host,
281s         port)``) and return the socket object.  Passing the optional
281s         *timeout* parameter will set the timeout on the socket instance
281s         before attempting to connect.  If no *timeout* is supplied, the
281s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
281s         is used.  If *source_address* is set it must be a tuple of (host, port)
281s         for the socket to bind as a source address before making the connection.
281s         An host of '' or port 0 tells the OS to use the default.
281s         """
281s 
281s         host, port = address
281s         if host.startswith("["):
281s             host = host.strip("[]")
281s         err = None
281s 
281s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
281s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
281s         # The original create_connection function always returns all records.
281s         family = allowed_gai_family()
281s 
281s         try:
281s             host.encode("idna")
281s         except UnicodeError:
281s             raise LocationParseError(f"'{host}', label empty or too long") from None
281s 
281s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
281s             af, socktype, proto, canonname, sa = res
281s             sock = None
281s             try:
281s                 sock = socket.socket(af, socktype, proto)
281s 
281s                 # If provided, set socket level options before connecting.
281s                 _set_socket_options(sock, socket_options)
281s 
281s                 if timeout is not _DEFAULT_TIMEOUT:
281s                     sock.settimeout(timeout)
281s                 if source_address:
281s                     sock.bind(source_address)
281s >               sock.connect(sa)
281s E               ConnectionRefusedError: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s method = 'GET', url = '/a%40b/api/contents', body = None
281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s redirect = False, assert_same_host = False
281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
281s release_conn = False, chunked = False, body_pos = None, preload_content = False
281s decode_content = False, response_kw = {}
281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
281s destination_scheme = None, conn = None, release_this_conn = True
281s http_tunnel_required = False, err = None, clean_exit = False
281s 
281s     def urlopen(  # type: ignore[override]
281s         self,
281s         method: str,
281s         url: str,
281s         body: _TYPE_BODY | None = None,
281s         headers: typing.Mapping[str, str] | None = None,
281s         retries: Retry | bool | int | None = None,
281s         redirect: bool = True,
281s         assert_same_host: bool = True,
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         pool_timeout: int | None = None,
281s         release_conn: bool | None = None,
281s         chunked: bool = False,
281s         body_pos: _TYPE_BODY_POSITION | None = None,
281s         preload_content: bool = True,
281s         decode_content: bool = True,
281s         **response_kw: typing.Any,
281s     ) -> BaseHTTPResponse:
281s         """
281s         Get a connection from the pool and perform an HTTP request. This is the
281s         lowest level call for making a request, so you'll need to specify all
281s         the raw details.
281s 
281s         .. note::
281s 
281s            More commonly, it's appropriate to use a convenience method
281s            such as :meth:`request`.
281s 
281s         .. note::
281s 
281s            `release_conn` will only behave as expected if
281s            `preload_content=False` because we want to make
281s            `preload_content=False` the default behaviour someday soon without
281s            breaking backwards compatibility.
281s 
281s         :param method:
281s             HTTP request method (such as GET, POST, PUT, etc.)
281s 
281s         :param url:
281s             The URL to perform the request on.
281s 
281s         :param body:
281s             Data to send in the request body, either :class:`str`, :class:`bytes`,
281s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
281s 
281s         :param headers:
281s             Dictionary of custom headers to send, such as User-Agent,
281s             If-None-Match, etc. If None, pool headers are used. If provided,
281s             these headers completely replace any pool-specific headers.
281s 
281s         :param retries:
281s             Configure the number of retries to allow before raising a
281s             :class:`~urllib3.exceptions.MaxRetryError` exception.
281s 
281s             Pass ``None`` to retry until you receive a response. Pass a
281s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
281s             over different types of retries.
281s             Pass an integer number to retry connection errors that many times,
281s             but no other types of errors. Pass zero to never retry.
281s 
281s             If ``False``, then retries are disabled and any exception is raised
281s             immediately. Also, instead of raising a MaxRetryError on redirects,
281s             the redirect response will be returned.
281s 
281s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
281s 
281s         :param redirect:
281s             If True, automatically handle redirects (status codes 301, 302,
281s             303, 307, 308). Each redirect counts as a retry. Disabling retries
281s             will disable redirect, too.
281s 
281s         :param assert_same_host:
281s             If ``True``, will make sure that the host of the pool requests is
281s             consistent else will raise HostChangedError. When ``False``, you can
281s             use the pool on an HTTP proxy and request foreign hosts.
281s 
281s         :param timeout:
281s             If specified, overrides the default timeout for this one
281s             request. It may be a float (in seconds) or an instance of
281s             :class:`urllib3.util.Timeout`.
281s 
281s         :param pool_timeout:
281s             If set and the pool is set to block=True, then this method will
281s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
281s             connection is available within the time period.
281s 
281s         :param bool preload_content:
281s             If True, the response's body will be preloaded into memory.
281s 
281s         :param bool decode_content:
281s             If True, will attempt to decode the body based on the
281s             'content-encoding' header.
281s 
281s         :param release_conn:
281s             If False, then the urlopen call will not release the connection
281s             back into the pool once a response is received (but will release if
281s             you read the entire contents of the response such as when
281s             `preload_content=True`). This is useful if you're not preloading
281s             the response's content immediately. You will need to call
281s             ``r.release_conn()`` on the response ``r`` to return the connection
281s             back into the pool. If None, it takes the value of ``preload_content``
281s             which defaults to ``True``.
281s 
281s         :param bool chunked:
281s             If True, urllib3 will send the body using chunked transfer
281s             encoding. Otherwise, urllib3 will send the body using the standard
281s             content-length form. Defaults to False.
281s 
281s         :param int body_pos:
281s             Position to seek to in file-like body in the event of a retry or
281s             redirect. Typically this won't need to be set because urllib3 will
281s             auto-populate the value when needed.
281s         """
281s         parsed_url = parse_url(url)
281s         destination_scheme = parsed_url.scheme
281s 
281s         if headers is None:
281s             headers = self.headers
281s 
281s         if not isinstance(retries, Retry):
281s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
281s 
281s         if release_conn is None:
281s             release_conn = preload_content
281s 
281s         # Check host
281s         if assert_same_host and not self.is_same_host(url):
281s             raise HostChangedError(self, url, retries)
281s 
281s         # Ensure that the URL we're connecting to is properly encoded
281s         if url.startswith("/"):
281s             url = to_str(_encode_target(url))
281s         else:
281s             url = to_str(parsed_url.url)
281s 
281s         conn = None
281s 
281s         # Track whether `conn` needs to be released before
281s         # returning/raising/recursing. Update this variable if necessary, and
281s         # leave `release_conn` constant throughout the function. That way, if
281s         # the function recurses, the original value of `release_conn` will be
281s         # passed down into the recursive call, and its value will be respected.
281s         #
281s         # See issue #651 [1] for details.
281s         #
281s         # [1] 
281s         release_this_conn = release_conn
281s 
281s         http_tunnel_required = connection_requires_http_tunnel(
281s             self.proxy, self.proxy_config, destination_scheme
281s         )
281s 
281s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
281s         # have to copy the headers dict so we can safely change it without those
281s         # changes being reflected in anyone else's copy.
281s         if not http_tunnel_required:
281s             headers = headers.copy()  # type: ignore[attr-defined]
281s             headers.update(self.proxy_headers)  # type: ignore[union-attr]
281s 
281s         # Must keep the exception bound to a separate variable or else Python 3
281s         # complains about UnboundLocalError.
281s         err = None
281s 
281s         # Keep track of whether we cleanly exited the except block. This
281s         # ensures we do proper cleanup in finally.
281s         clean_exit = False
281s 
281s         # Rewind body position, if needed. Record current position
281s         # for future rewinds in the event of a redirect/retry.
281s         body_pos = set_file_position(body, body_pos)
281s 
281s         try:
281s             # Request a connection from the queue.
281s             timeout_obj = self._get_timeout(timeout)
281s             conn = self._get_conn(timeout=pool_timeout)
281s 
281s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
281s 
281s             # Is this a closed/new connection that requires CONNECT tunnelling?
281s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
281s                 try:
281s                     self._prepare_proxy(conn)
281s                 except (BaseSSLError, OSError, SocketTimeout) as e:
281s                     self._raise_timeout(
281s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
281s                     )
281s                     raise
281s 
281s             # If we're going to release the connection in ``finally:``, then
281s             # the response doesn't need to know about the connection. Otherwise
281s             # it will also try to release it and we'll have a double-release
281s             # mess.
281s             response_conn = conn if not release_conn else None
281s 
281s             # Make the request on the HTTPConnection object
281s >           response = self._make_request(
281s                 conn,
281s                 method,
281s                 url,
281s                 timeout=timeout_obj,
281s                 body=body,
281s                 headers=headers,
281s                 chunked=chunked,
281s                 retries=retries,
281s                 response_conn=response_conn,
281s                 preload_content=preload_content,
281s                 decode_content=decode_content,
281s                 **response_kw,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
281s     conn.request(
281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
281s     self.endheaders()
281s /usr/lib/python3.12/http/client.py:1331: in endheaders
281s     self._send_output(message_body, encode_chunked=encode_chunked)
281s /usr/lib/python3.12/http/client.py:1091: in _send_output
281s     self.send(msg)
281s /usr/lib/python3.12/http/client.py:1035: in send
281s     self.connect()
281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
281s     self.sock = self._new_conn()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s             sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s         except socket.gaierror as e:
281s             raise NameResolutionError(self.host, self, e) from e
281s         except SocketTimeout as e:
281s             raise ConnectTimeoutError(
281s                 self,
281s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
281s             ) from e
281s 
281s         except OSError as e:
281s >           raise NewConnectionError(
281s                 self, f"Failed to establish a new connection: {e}"
281s             ) from e
281s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s >           resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:486:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
281s     retries = retries.increment(
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s method = 'GET', url = '/a%40b/api/contents', response = None
281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
281s _pool = 
281s _stacktrace = 
281s 
281s     def increment(
281s         self,
281s         method: str | None = None,
281s         url: str | None = None,
281s         response: BaseHTTPResponse | None = None,
281s         error: Exception | None = None,
281s         _pool: ConnectionPool | None = None,
281s         _stacktrace: TracebackType | None = None,
281s     ) -> Retry:
281s         """Return a new Retry object with incremented retry counters.
281s 
281s         :param response: A response object, or None, if the server did not
281s             return a response.
281s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
281s         :param Exception error: An error encountered during the request, or
281s             None if the response was received successfully.
281s 
281s         :return: A new ``Retry`` object.
281s         """
281s         if self.total is False and error:
281s             # Disabled, indicate to re-raise the error.
281s             raise reraise(type(error), error, _stacktrace)
281s 
281s         total = self.total
281s         if total is not None:
281s             total -= 1
281s 
281s         connect = self.connect
281s         read = self.read
281s         redirect = self.redirect
281s         status_count = self.status
281s         other = self.other
281s         cause = "unknown"
281s         status = None
281s         redirect_location = None
281s 
281s         if error and self._is_connection_error(error):
281s             # Connect retry?
281s             if connect is False:
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif connect is not None:
281s                 connect -= 1
281s 
281s         elif error and self._is_read_error(error):
281s             # Read retry?
281s             if read is False or method is None or not self._is_method_retryable(method):
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif read is not None:
281s                 read -= 1
281s 
281s         elif error:
281s             # Other retry?
281s             if other is not None:
281s                 other -= 1
281s 
281s         elif response and response.get_redirect_location():
281s             # Redirect retry?
281s             if redirect is not None:
281s                 redirect -= 1
281s             cause = "too many redirects"
281s             response_redirect_location = response.get_redirect_location()
281s             if response_redirect_location:
281s                 redirect_location = response_redirect_location
281s             status = response.status
281s 
281s         else:
281s             # Incrementing because of a server error like a 500 in
281s             # status_forcelist and the given method is in the allowed_methods
281s             cause = ResponseError.GENERIC_ERROR
281s             if response and response.status:
281s                 if status_count is not None:
281s                     status_count -= 1
281s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
281s                 status = response.status
281s 
281s         history = self.history + (
281s             RequestHistory(method, url, error, status, redirect_location),
281s         )
281s 
281s         new_retry = self.new(
281s             total=total,
281s             connect=connect,
281s             read=read,
281s             redirect=redirect,
281s             status=status_count,
281s             other=other,
281s             history=history,
281s         )
281s 
281s         if new_retry.is_exhausted():
281s             reason = error or ResponseError(cause)
281s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
281s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s >               cls.fetch_url(url)
281s 
281s notebook/tests/launchnotebook.py:53:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s notebook/tests/launchnotebook.py:82: in fetch_url
281s     return requests.get(url)
281s /usr/lib/python3/dist-packages/requests/api.py:73: in get
281s     return request("get", url, params=params, **kwargs)
281s /usr/lib/python3/dist-packages/requests/api.py:59: in request
281s     return session.request(method=method, url=url, **kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
281s     resp = self.send(prep, **send_kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
281s     r = adapter.send(request, **kwargs)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s             resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s         except (ProtocolError, OSError) as err:
281s             raise ConnectionError(err, request=request)
281s 
281s         except MaxRetryError as e:
281s             if isinstance(e.reason, ConnectTimeoutError):
281s                 # TODO: Remove this in 3.0.0: see #2811
281s                 if not isinstance(e.reason, NewConnectionError):
281s                     raise ConnectTimeout(e, request=request)
281s 
281s             if isinstance(e.reason, ResponseError):
281s                 raise RetryError(e, request=request)
281s 
281s             if isinstance(e.reason, _ProxyError):
281s                 raise ProxyError(e, request=request)
281s 
281s             if isinstance(e.reason, _SSLError):
281s                 # This branch is for urllib3 v1.22 and later.
281s                 raise SSLError(e, request=request)
281s 
281s >           raise ConnectionError(e, request=request)
281s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def setup_class(cls):
281s         cls.tmp_dir = TemporaryDirectory()
281s         def tmp(*parts):
281s             path = os.path.join(cls.tmp_dir.name, *parts)
281s             try:
281s                 os.makedirs(path)
281s             except OSError as e:
281s                 if e.errno != errno.EEXIST:
281s                     raise
281s             return path
281s 
281s         cls.home_dir = tmp('home')
281s         data_dir = cls.data_dir = tmp('data')
281s         config_dir = cls.config_dir = tmp('config')
281s         runtime_dir = cls.runtime_dir = tmp('runtime')
281s         cls.notebook_dir = tmp('notebooks')
281s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
281s         cls.env_patch.start()
281s         # Patch systemwide & user-wide data & config directories, to isolate
281s         # the tests from oddities of the local setup. But leave Python env
281s         # locations alone, so data files for e.g. nbconvert are accessible.
281s         # If this isolation isn't sufficient, you may need to run the tests in
281s         # a virtualenv or conda env.
281s         cls.path_patch = patch.multiple(
281s             jupyter_core.paths,
281s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
281s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
281s         )
281s         cls.path_patch.start()
281s 
281s         config = cls.config or Config()
281s         config.NotebookNotary.db_file = ':memory:'
281s 
281s         cls.token = hexlify(os.urandom(4)).decode('ascii')
281s 
281s         started = Event()
281s         def start_thread():
281s             try:
281s                 bind_args = cls.get_bind_args()
281s                 app = cls.notebook = NotebookApp(
281s                     port_retries=0,
281s                     open_browser=False,
281s                     config_dir=cls.config_dir,
281s                     data_dir=cls.data_dir,
281s                     runtime_dir=cls.runtime_dir,
281s                     notebook_dir=cls.notebook_dir,
281s                     base_url=cls.url_prefix,
281s                     config=config,
281s                     allow_root=True,
281s                     token=cls.token,
281s                     **bind_args
281s                 )
281s                 if "asyncio" in sys.modules:
281s                     app._init_asyncio_patch()
281s                 import asyncio
281s 
281s                 asyncio.set_event_loop(asyncio.new_event_loop())
281s                 # Patch the current loop in order to match production
281s                 # behavior
281s                 import nest_asyncio
281s 
281s                 nest_asyncio.apply()
281s                 # don't register signal handler during tests
281s                 app.init_signal = lambda : None
281s                 # clear log handlers and propagate to root for nose to capture it
281s                 # needs to be redone after initialize, which reconfigures logging
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 app.initialize(argv=cls.get_argv())
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 loop = IOLoop.current()
281s                 loop.add_callback(started.set)
281s                 app.start()
281s             finally:
281s                 # set the event, so failure to start doesn't cause a hang
281s                 started.set()
281s                 app.session_manager.close()
281s         cls.notebook_thread = Thread(target=start_thread)
281s         cls.notebook_thread.daemon = True
281s         cls.notebook_thread.start()
281s         started.wait()
281s >       cls.wait_until_alive()
281s 
281s notebook/tests/launchnotebook.py:198:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E                   RuntimeError: The notebook server failed to start
281s 
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s ________________ ERROR at setup of APITest.test_rename_existing ________________
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s >           sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s     raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s 
281s     def create_connection(
281s         address: tuple[str, int],
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         source_address: tuple[str, int] | None = None,
281s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s     ) -> socket.socket:
281s         """Connect to *address* and return the socket object.
281s 
281s         Convenience function.  Connect to *address* (a 2-tuple ``(host,
281s         port)``) and return the socket object.  Passing the optional
281s         *timeout* parameter will set the timeout on the socket instance
281s         before attempting to connect.  If no *timeout* is supplied, the
281s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
281s         is used.  If *source_address* is set it must be a tuple of (host, port)
281s         for the socket to bind as a source address before making the connection.
281s         An host of '' or port 0 tells the OS to use the default.
281s         """
281s 
281s         host, port = address
281s         if host.startswith("["):
281s             host = host.strip("[]")
281s         err = None
281s 
281s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
281s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
281s         # The original create_connection function always returns all records.
281s         family = allowed_gai_family()
281s 
281s         try:
281s             host.encode("idna")
281s         except UnicodeError:
281s             raise LocationParseError(f"'{host}', label empty or too long") from None
281s 
281s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
281s             af, socktype, proto, canonname, sa = res
281s             sock = None
281s             try:
281s                 sock = socket.socket(af, socktype, proto)
281s 
281s                 # If provided, set socket level options before connecting.
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s _____________________ ERROR at setup of APITest.test_save ______________________ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
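
Annotation: the `retries` parameter documented above accepts `None`, a `Retry` object, an int, or `False`. A simplified stand-in for that normalization (mirroring what `Retry.from_int` is documented to do, not its actual source; the dict stands in for a `Retry` object):

```python
# Simplified stand-in for the documented `retries` handling; the real
# logic lives in urllib3's Retry.from_int.
def normalize_retries(retries, default_total=3):
    if retries is None:
        return {"total": default_total}          # retry up to the pool default
    if retries is False:                         # must precede the int check:
        return {"total": 0,                      # bool is a subclass of int
                "raise_on_redirect": False}      # disabled; return redirects as-is
    if isinstance(retries, int):
        return {"total": retries}                # plain int: that many retries
    return retries                               # assume a Retry-like object
```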
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
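
Annotation: the timeout-coercion branch in the adapter code above (a `(connect, read)` tuple becomes separate timeouts, a single number applies to both) is self-contained enough to sketch without requests. `TimeoutSauce` is requests' alias for urllib3's `Timeout`; a plain dict stands in for it here.

```python
# Sketch of the adapter's timeout coercion above, with a dict standing
# in for urllib3's Timeout object.
def normalize_timeout(timeout):
    if isinstance(timeout, tuple):
        try:
            connect, read = timeout
        except ValueError:
            raise ValueError(
                f"Invalid timeout {timeout}. Pass a (connect, read) timeout "
                f"tuple, or a single float to set both timeouts to the same value."
            )
        return {"connect": connect, "read": read}
    if isinstance(timeout, dict):
        return timeout                                  # already normalized
    return {"connect": timeout, "read": timeout}        # one value (or None) for both
```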
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
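
Annotation: the counter bookkeeping in `Retry.increment` above reduces to: decrement the counters relevant to the error type, then fail hard once any counter goes negative. A condensed stand-in (not the urllib3 implementation), covering only the total and connect counters:

```python
class MaxRetries(Exception):
    """Stand-in for urllib3's MaxRetryError."""

def increment(total, connect, *, is_connection_error):
    # Decrement the overall budget, if one is set.
    if total is not None:
        total -= 1
    # Connection errors also consume the connect-specific budget.
    if is_connection_error:
        if connect is False:
            raise ConnectionError("connect retries disabled; re-raising")
        if connect is not None:
            connect -= 1
    # A counter below zero means the retry budget is exhausted.
    if any(c is not None and c is not False and c < 0 for c in (total, connect)):
        raise MaxRetries("Max retries exceeded")
    return total, connect
```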
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
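
Annotation: the server-startup pattern in `setup_class` above (run the server in a daemon thread, signal readiness via an `Event`, and always set the event in `finally` so a failed start cannot hang the caller) can be sketched generically. `run_server` and `start_in_thread` are stand-in names, not the notebook code.

```python
import threading

def start_in_thread(run_server):
    # Sketch of the setup_class startup pattern quoted above.
    started = threading.Event()
    def target():
        try:
            run_server(on_ready=started.set)
        finally:
            started.set()   # failure to start must not cause a hang
    t = threading.Thread(target=target, daemon=True)
    t.start()
    started.wait()
    return t
```

The `finally: started.set()` is the load-bearing detail: without it, any exception during server initialization would leave the main thread blocked in `started.wait()` forever.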
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s ____________________ ERROR at setup of APITest.test_upload _____________________ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
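
Annotation: the `wait_until_alive` polling loop shown above (poll a health URL, re-raise immediately-fatal errors, give up early if the server thread has died) can be sketched with stand-in `fetch` and `thread_alive` callables replacing the notebook test helpers.

```python
import time

def wait_until_alive(fetch, thread_alive, max_wait=30.0, poll_interval=0.1):
    # Sketch of the polling loop in launchnotebook.py quoted above.
    last_error = None
    for _ in range(int(max_wait / poll_interval)):
        try:
            fetch()
            return  # server responded
        except ModuleNotFoundError:
            raise   # programming errors go straight back to the caller
        except Exception as e:
            last_error = e
            if not thread_alive():
                raise RuntimeError("The notebook server failed to start") from e
            time.sleep(poll_interval)
    raise RuntimeError("The notebook server didn't respond in time") from last_error
```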
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s __________________ ERROR at setup of APITest.test_upload_b64 ___________________ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
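This second traceback (for `test_upload_b64`) bottoms out in the same root cause as the first: `[Errno 111] Connection refused`, raised because nothing is listening on `localhost:12341`. That errno can be reproduced directly with the stdlib, independent of urllib3 (the free-port reservation trick below is slightly racy but fine for illustration):

```python
# Reproduce the "[Errno 111] Connection refused" seen throughout this
# log: connect to a local port that nothing is listening on.
import errno
import socket

# Reserve a free ephemeral port by binding, then close it so no
# listener remains.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
port = probe.getsockname()[1]
probe.close()

caught = None
try:
    socket.create_connection(("127.0.0.1", port), timeout=1.0)
except ConnectionRefusedError as e:
    caught = e  # errno.ECONNREFUSED (111 on Linux)

assert caught is not None
assert caught.errno == errno.ECONNREFUSED
```

In the log, urllib3 wraps this `OSError` subclass in `NewConnectionError`, `Retry(total=0)` promotes it to `MaxRetryError`, and requests finally re-raises it as `requests.exceptions.ConnectionError` -- the three-layer chain visible in each "direct cause" block above.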
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s )
281s elif isinstance(timeout, TimeoutSauce):
281s pass
281s else:
281s timeout = TimeoutSauce(connect=timeout, read=timeout)
281s
281s try:
281s > resp = conn.urlopen(
281s method=request.method,
281s url=url,
281s body=request.body,
281s headers=request.headers,
281s redirect=False,
281s assert_same_host=False,
281s preload_content=False,
281s decode_content=False,
281s retries=self.max_retries,
281s timeout=timeout,
281s chunked=chunked,
281s )
281s
281s /usr/lib/python3/dist-packages/requests/adapters.py:486:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
281s retries = retries.increment(
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s method = 'GET', url = '/a%40b/api/contents', response = None
281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
281s _pool =
281s _stacktrace =
281s
281s def increment(
281s self,
281s method: str | None = None,
281s url: str | None = None,
281s response: BaseHTTPResponse | None = None,
281s error: Exception | None = None,
281s _pool: ConnectionPool | None = None,
281s _stacktrace: TracebackType | None = None,
281s ) -> Retry:
281s """Return a new Retry object with incremented retry counters.
281s
281s :param response: A response object, or None, if the server did not
281s return a response.
281s :type response: :class:`~urllib3.response.BaseHTTPResponse`
281s :param Exception error: An error encountered during the request, or
281s None if the response was received successfully.
281s
281s :return: A new ``Retry`` object.
281s """
281s if self.total is False and error:
281s # Disabled, indicate to re-raise the error.
281s raise reraise(type(error), error, _stacktrace)
281s
281s total = self.total
281s if total is not None:
281s total -= 1
281s
281s connect = self.connect
281s read = self.read
281s redirect = self.redirect
281s status_count = self.status
281s other = self.other
281s cause = "unknown"
281s status = None
281s redirect_location = None
281s
281s if error and self._is_connection_error(error):
281s # Connect retry?
281s if connect is False:
281s raise reraise(type(error), error, _stacktrace)
281s elif connect is not None:
281s connect -= 1
281s
281s elif error and self._is_read_error(error):
281s # Read retry?
281s if read is False or method is None or not self._is_method_retryable(method):
281s raise reraise(type(error), error, _stacktrace)
281s elif read is not None:
281s read -= 1
281s
281s elif error:
281s # Other retry?
281s if other is not None:
281s other -= 1
281s
281s elif response and response.get_redirect_location():
281s # Redirect retry?
281s if redirect is not None:
281s redirect -= 1
281s cause = "too many redirects"
281s response_redirect_location = response.get_redirect_location()
281s if response_redirect_location:
281s redirect_location = response_redirect_location
281s status = response.status
281s
281s else:
281s # Incrementing because of a server error like a 500 in
281s # status_forcelist and the given method is in the allowed_methods
281s cause = ResponseError.GENERIC_ERROR
281s if response and response.status:
281s if status_count is not None:
281s status_count -= 1
281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
281s status = response.status
281s
281s history = self.history + (
281s RequestHistory(method, url, error, status, redirect_location),
281s )
281s
281s new_retry = self.new(
281s total=total,
281s connect=connect,
281s read=read,
281s redirect=redirect,
281s status=status_count,
281s other=other,
281s history=history,
281s )
281s
281s if new_retry.is_exhausted():
281s reason = error or
ResponseError(cause)
281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s
281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
281s
281s During handling of the above exception, another exception occurred:
281s
281s cls =
281s
281s @classmethod
281s def wait_until_alive(cls):
281s """Wait for the server to be alive"""
281s url = cls.base_url() + 'api/contents'
281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s try:
281s > cls.fetch_url(url)
281s
281s notebook/tests/launchnotebook.py:53:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s notebook/tests/launchnotebook.py:82: in fetch_url
281s return requests.get(url)
281s /usr/lib/python3/dist-packages/requests/api.py:73: in get
281s return request("get", url, params=params, **kwargs)
281s /usr/lib/python3/dist-packages/requests/api.py:59: in request
281s return session.request(method=method, url=url, **kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
281s resp = self.send(prep, **send_kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
281s r = adapter.send(request, **kwargs)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s self =
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s
281s def send(
281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s ):
281s """Sends PreparedRequest object. Returns Response object.
281s
281s :param request: The :class:`PreparedRequest ` being sent.
281s :param stream: (optional) Whether to stream the request content.
281s :param timeout: (optional) How long to wait for the server to send
281s data before giving up, as a float, or a :ref:`(connect timeout,
281s read timeout) ` tuple.
281s :type timeout: float or tuple or urllib3 Timeout object
281s :param verify: (optional) Either a boolean, in which case it controls whether
281s we verify the server's TLS certificate, or a string, in which case it
281s must be a path to a CA bundle to use
281s :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s :param proxies: (optional) The proxies dictionary to apply to the request.
281s :rtype: requests.Response
281s """
281s
281s try:
281s conn = self.get_connection(request.url, proxies)
281s except LocationValueError as e:
281s raise InvalidURL(e, request=request)
281s
281s self.cert_verify(conn, request.url, verify, cert)
281s url = self.request_url(request, proxies)
281s self.add_headers(
281s request,
281s stream=stream,
281s timeout=timeout,
281s verify=verify,
281s cert=cert,
281s proxies=proxies,
281s )
281s
281s chunked = not (request.body is None or "Content-Length" in request.headers)
281s
281s if isinstance(timeout, tuple):
281s try:
281s connect, read = timeout
281s timeout = TimeoutSauce(connect=connect, read=read)
281s except ValueError:
281s raise ValueError(
281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s f"or a single float to set both timeouts to the same value."
281s )
281s elif isinstance(timeout, TimeoutSauce):
281s pass
281s else:
281s timeout = TimeoutSauce(connect=timeout, read=timeout)
281s
281s try:
281s resp = conn.urlopen(
281s method=request.method,
281s url=url,
281s body=request.body,
281s headers=request.headers,
281s redirect=False,
281s assert_same_host=False,
281s preload_content=False,
281s decode_content=False,
281s retries=self.max_retries,
281s timeout=timeout,
281s chunked=chunked,
281s )
281s
281s except (ProtocolError, OSError) as err:
281s raise ConnectionError(err, request=request)
281s
281s except MaxRetryError as e:
281s if isinstance(e.reason, ConnectTimeoutError):
281s # TODO: Remove this in 3.0.0: see #2811
281s if not isinstance(e.reason, NewConnectionError):
281s raise ConnectTimeout(e, request=request)
281s
281s if isinstance(e.reason, ResponseError):
281s raise RetryError(e, request=request)
281s
281s if isinstance(e.reason, _ProxyError):
281s raise ProxyError(e, request=request)
281s
281s if isinstance(e.reason, _SSLError):
281s # This branch is for urllib3 v1.22 and later.
281s raise SSLError(e, request=request)
281s
281s > raise ConnectionError(e, request=request)
281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s
281s The above exception was the direct cause of the following exception:
281s
281s cls =
281s
281s @classmethod
281s def setup_class(cls):
281s cls.tmp_dir = TemporaryDirectory()
281s def tmp(*parts):
281s path = os.path.join(cls.tmp_dir.name, *parts)
281s try:
281s os.makedirs(path)
281s except OSError as e:
281s if e.errno != errno.EEXIST:
281s raise
281s return path
281s
281s cls.home_dir = tmp('home')
281s data_dir = cls.data_dir = tmp('data')
281s config_dir = cls.config_dir = tmp('config')
281s runtime_dir = cls.runtime_dir = tmp('runtime')
281s cls.notebook_dir = tmp('notebooks')
281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
281s cls.env_patch.start()
281s # Patch systemwide & user-wide data & config directories, to isolate
281s # the tests from oddities of the local setup. But leave Python env
281s # locations alone, so data files for e.g. nbconvert are accessible.
281s # If this isolation isn't sufficient, you may need to run the tests in
281s # a virtualenv or conda env.
281s cls.path_patch = patch.multiple(
281s jupyter_core.paths,
281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
281s )
281s cls.path_patch.start()
281s
281s config = cls.config or Config()
281s config.NotebookNotary.db_file = ':memory:'
281s
281s cls.token = hexlify(os.urandom(4)).decode('ascii')
281s
281s started = Event()
281s def start_thread():
281s try:
281s bind_args = cls.get_bind_args()
281s app = cls.notebook = NotebookApp(
281s port_retries=0,
281s open_browser=False,
281s config_dir=cls.config_dir,
281s data_dir=cls.data_dir,
281s runtime_dir=cls.runtime_dir,
281s notebook_dir=cls.notebook_dir,
281s base_url=cls.url_prefix,
281s config=config,
281s allow_root=True,
281s token=cls.token,
281s **bind_args
281s )
281s if "asyncio" in sys.modules:
281s app._init_asyncio_patch()
281s import asyncio
281s
281s asyncio.set_event_loop(asyncio.new_event_loop())
281s # Patch the current loop in order to match production
281s # behavior
281s import nest_asyncio
281s
281s nest_asyncio.apply()
281s # don't register signal handler during tests
281s app.init_signal = lambda : None
281s # clear log handlers and propagate to root for nose to capture it
281s # needs to be redone after initialize, which reconfigures logging
281s app.log.propagate = True
281s app.log.handlers = []
281s app.initialize(argv=cls.get_argv())
281s app.log.propagate = True
281s app.log.handlers = []
281s loop = IOLoop.current()
281s loop.add_callback(started.set)
281s app.start()
281s finally:
281s # set the event, so failure to start doesn't cause a hang
281s started.set()
281s app.session_manager.close()
281s cls.notebook_thread = Thread(target=start_thread)
281s cls.notebook_thread.daemon = True
281s cls.notebook_thread.start()
281s started.wait()
281s > cls.wait_until_alive()
281s
281s notebook/tests/launchnotebook.py:198:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s cls =
281s
281s
@classmethod
281s def wait_until_alive(cls):
281s """Wait for the server to be alive"""
281s url = cls.base_url() + 'api/contents'
281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s try:
281s cls.fetch_url(url)
281s except ModuleNotFoundError as error:
281s # Errors that should be immediately thrown back to caller
281s raise error
281s except Exception as e:
281s if not cls.notebook_thread.is_alive():
281s > raise RuntimeError("The notebook server failed to start") from e
281s E RuntimeError: The notebook server failed to start
281s
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s __________________ ERROR at setup of APITest.test_upload_txt ___________________
281s
281s self =
281s
281s def _new_conn(self) -> socket.socket:
281s """Establish a socket connection and set nodelay settings on it.
281s
281s :return: New socket connection.
281s """
281s try:
281s > sock = connection.create_connection(
281s (self._dns_host, self.port),
281s self.timeout,
281s source_address=self.source_address,
281s socket_options=self.socket_options,
281s )
281s
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s
281s def create_connection(
281s address: tuple[str, int],
281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s source_address: tuple[str, int] | None = None,
281s socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s ) -> socket.socket:
281s """Connect to *address* and return the socket object.
281s
281s Convenience function. Connect to *address* (a 2-tuple ``(host,
281s port)``) and return the socket object.
Passing the optional
281s *timeout* parameter will set the timeout on the socket instance
281s before attempting to connect. If no *timeout* is supplied, the
281s global default timeout setting returned by :func:`socket.getdefaulttimeout`
281s is used. If *source_address* is set it must be a tuple of (host, port)
281s for the socket to bind as a source address before making the connection.
281s An host of '' or port 0 tells the OS to use the default.
281s """
281s
281s host, port = address
281s if host.startswith("["):
281s host = host.strip("[]")
281s err = None
281s
281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets
281s # us select whether to work with IPv4 DNS records, IPv6 records, or both.
281s # The original create_connection function always returns all records.
281s family = allowed_gai_family()
281s
281s try:
281s host.encode("idna")
281s except UnicodeError:
281s raise LocationParseError(f"'{host}', label empty or too long") from None
281s
281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
281s af, socktype, proto, canonname, sa = res
281s sock = None
281s try:
281s sock = socket.socket(af, socktype, proto)
281s
281s # If provided, set socket level options before connecting.
281s _set_socket_options(sock, socket_options)
281s
281s if timeout is not _DEFAULT_TIMEOUT:
281s sock.settimeout(timeout)
281s if source_address:
281s sock.bind(source_address)
281s > sock.connect(sa)
281s E ConnectionRefusedError: [Errno 111] Connection refused
281s
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
281s
281s The above exception was the direct cause of the following exception:
281s
281s self =
281s method = 'GET', url = '/a%40b/api/contents', body = None
281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s redirect = False, assert_same_host = False
281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
281s release_conn = False, chunked = False, body_pos = None, preload_content = False
281s decode_content = False, response_kw = {}
281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
281s destination_scheme = None, conn = None, release_this_conn = True
281s http_tunnel_required = False, err = None, clean_exit = False
281s
281s def urlopen( # type: ignore[override]
281s self,
281s method: str,
281s url: str,
281s body: _TYPE_BODY | None = None,
281s headers: typing.Mapping[str, str] | None = None,
281s retries: Retry | bool | int | None = None,
281s redirect: bool = True,
281s assert_same_host: bool = True,
281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s pool_timeout: int | None = None,
281s release_conn: bool | None = None,
281s chunked: bool = False,
281s body_pos: _TYPE_BODY_POSITION | None = None,
281s preload_content: bool = True,
281s decode_content: bool = True,
281s **response_kw: typing.Any,
281s ) -> BaseHTTPResponse:
281s """
281s Get a connection from the pool and perform an HTTP request.
This is the
281s lowest level call for making a request, so you'll need to specify all
281s the raw details.
281s
281s .. note::
281s
281s More commonly, it's appropriate to use a convenience method
281s such as :meth:`request`.
281s
281s .. note::
281s
281s `release_conn` will only behave as expected if
281s `preload_content=False` because we want to make
281s `preload_content=False` the default behaviour someday soon without
281s breaking backwards compatibility.
281s
281s :param method:
281s HTTP request method (such as GET, POST, PUT, etc.)
281s
281s :param url:
281s The URL to perform the request on.
281s
281s :param body:
281s Data to send in the request body, either :class:`str`, :class:`bytes`,
281s an iterable of :class:`str`/:class:`bytes`, or a file-like object.
281s
281s :param headers:
281s Dictionary of custom headers to send, such as User-Agent,
281s If-None-Match, etc. If None, pool headers are used. If provided,
281s these headers completely replace any pool-specific headers.
281s
281s :param retries:
281s Configure the number of retries to allow before raising a
281s :class:`~urllib3.exceptions.MaxRetryError` exception.
281s
281s Pass ``None`` to retry until you receive a response. Pass a
281s :class:`~urllib3.util.retry.Retry` object for fine-grained control
281s over different types of retries.
281s Pass an integer number to retry connection errors that many times,
281s but no other types of errors. Pass zero to never retry.
281s
281s If ``False``, then retries are disabled and any exception is raised
281s immediately. Also, instead of raising a MaxRetryError on redirects,
281s the redirect response will be returned.
281s
281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
281s
281s :param redirect:
281s If True, automatically handle redirects (status codes 301, 302,
281s 303, 307, 308). Each redirect counts as a retry. Disabling retries
281s will disable redirect, too.
281s
281s :param assert_same_host:
281s If ``True``, will make sure that the host of the pool requests is
281s consistent else will raise HostChangedError. When ``False``, you can
281s use the pool on an HTTP proxy and request foreign hosts.
281s
281s :param timeout:
281s If specified, overrides the default timeout for this one
281s request. It may be a float (in seconds) or an instance of
281s :class:`urllib3.util.Timeout`.
281s
281s :param pool_timeout:
281s If set and the pool is set to block=True, then this method will
281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no
281s connection is available within the time period.
281s
281s :param bool preload_content:
281s If True, the response's body will be preloaded into memory.
281s
281s :param bool decode_content:
281s If True, will attempt to decode the body based on the
281s 'content-encoding' header.
281s
281s :param release_conn:
281s If False, then the urlopen call will not release the connection
281s back into the pool once a response is received (but will release if
281s you read the entire contents of the response such as when
281s `preload_content=True`). This is useful if you're not preloading
281s the response's content immediately. You will need to call
281s ``r.release_conn()`` on the response ``r`` to return the connection
281s back into the pool. If None, it takes the value of ``preload_content``
281s which defaults to ``True``.
281s
281s :param bool chunked:
281s If True, urllib3 will send the body using chunked transfer
281s encoding. Otherwise, urllib3 will send the body using the standard
281s content-length form. Defaults to False.
281s
281s :param int body_pos:
281s Position to seek to in file-like body in the event of a retry or
281s redirect. Typically this won't need to be set because urllib3 will
281s auto-populate the value when needed.
281s """
281s parsed_url = parse_url(url)
281s destination_scheme = parsed_url.scheme
281s
281s if headers is None:
281s headers = self.headers
281s
281s if not isinstance(retries, Retry):
281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
281s
281s if release_conn is None:
281s release_conn = preload_content
281s
281s # Check host
281s if assert_same_host and not self.is_same_host(url):
281s raise HostChangedError(self, url, retries)
281s
281s # Ensure that the URL we're connecting to is properly encoded
281s if url.startswith("/"):
281s url = to_str(_encode_target(url))
281s else:
281s url = to_str(parsed_url.url)
281s
281s conn = None
281s
281s # Track whether `conn` needs to be released before
281s # returning/raising/recursing. Update this variable if necessary, and
281s # leave `release_conn` constant throughout the function. That way, if
281s # the function recurses, the original value of `release_conn` will be
281s # passed down into the recursive call, and its value will be respected.
281s #
281s # See issue #651 [1] for details.
281s #
281s # [1]
281s release_this_conn = release_conn
281s
281s http_tunnel_required = connection_requires_http_tunnel(
281s self.proxy, self.proxy_config, destination_scheme
281s )
281s
281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We
281s # have to copy the headers dict so we can safely change it without those
281s # changes being reflected in anyone else's copy.
281s if not http_tunnel_required:
281s headers = headers.copy() # type: ignore[attr-defined]
281s headers.update(self.proxy_headers) # type: ignore[union-attr]
281s
281s # Must keep the exception bound to a separate variable or else Python 3
281s # complains about UnboundLocalError.
281s err = None
281s
281s # Keep track of whether we cleanly exited the except block. This
281s # ensures we do proper cleanup in finally.
281s clean_exit = False
281s
281s # Rewind body position, if needed.
Record current position
281s # for future rewinds in the event of a redirect/retry.
281s body_pos = set_file_position(body, body_pos)
281s
281s try:
281s # Request a connection from the queue.
281s timeout_obj = self._get_timeout(timeout)
281s conn = self._get_conn(timeout=pool_timeout)
281s
281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment]
281s
281s # Is this a closed/new connection that requires CONNECT tunnelling?
281s if self.proxy is not None and http_tunnel_required and conn.is_closed:
281s try:
281s self._prepare_proxy(conn)
281s except (BaseSSLError, OSError, SocketTimeout) as e:
281s self._raise_timeout(
281s err=e, url=self.proxy.url, timeout_value=conn.timeout
281s )
281s raise
281s
281s # If we're going to release the connection in ``finally:``, then
281s # the response doesn't need to know about the connection. Otherwise
281s # it will also try to release it and we'll have a double-release
281s # mess.
281s response_conn = conn if not release_conn else None
281s
281s # Make the request on the HTTPConnection object
281s > response = self._make_request(
281s conn,
281s method,
281s url,
281s timeout=timeout_obj,
281s body=body,
281s headers=headers,
281s chunked=chunked,
281s retries=retries,
281s response_conn=response_conn,
281s preload_content=preload_content,
281s decode_content=decode_content,
281s **response_kw,
281s )
281s
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
281s conn.request(
281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
281s self.endheaders()
281s /usr/lib/python3.12/http/client.py:1331: in endheaders
281s self._send_output(message_body, encode_chunked=encode_chunked)
281s /usr/lib/python3.12/http/client.py:1091: in _send_output
281s self.send(msg)
281s /usr/lib/python3.12/http/client.py:1035: in
send
281s self.connect()
281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
281s self.sock = self._new_conn()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s self =
281s
281s def _new_conn(self) -> socket.socket:
281s """Establish a socket connection and set nodelay settings on it.
281s
281s :return: New socket connection.
281s """
281s try:
281s sock = connection.create_connection(
281s (self._dns_host, self.port),
281s self.timeout,
281s source_address=self.source_address,
281s socket_options=self.socket_options,
281s )
281s except socket.gaierror as e:
281s raise NameResolutionError(self.host, self, e) from e
281s except SocketTimeout as e:
281s raise ConnectTimeoutError(
281s self,
281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
281s ) from e
281s
281s except OSError as e:
281s > raise NewConnectionError(
281s self, f"Failed to establish a new connection: {e}"
281s ) from e
281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
281s
281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
281s
281s The above exception was the direct cause of the following exception:
281s
281s self =
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s
281s def send(
281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s ):
281s """Sends PreparedRequest object. Returns Response object.
281s
281s :param request: The :class:`PreparedRequest ` being sent.
281s :param stream: (optional) Whether to stream the request content.
281s :param timeout: (optional) How long to wait for the server to send
281s data before giving up, as a float, or a :ref:`(connect timeout,
281s read timeout) ` tuple.
281s :type timeout: float or tuple or urllib3 Timeout object
281s :param verify: (optional) Either a boolean, in which case it controls whether
281s we verify the server's TLS certificate, or a string, in which case it
281s must be a path to a CA bundle to use
281s :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s :param proxies: (optional) The proxies dictionary to apply to the request.
281s :rtype: requests.Response
281s """
281s
281s try:
281s conn = self.get_connection(request.url, proxies)
281s except LocationValueError as e:
281s raise InvalidURL(e, request=request)
281s
281s self.cert_verify(conn, request.url, verify, cert)
281s url = self.request_url(request, proxies)
281s self.add_headers(
281s request,
281s stream=stream,
281s timeout=timeout,
281s verify=verify,
281s cert=cert,
281s proxies=proxies,
281s )
281s
281s chunked = not (request.body is None or "Content-Length" in request.headers)
281s
281s if isinstance(timeout, tuple):
281s try:
281s connect, read = timeout
281s timeout = TimeoutSauce(connect=connect, read=read)
281s except ValueError:
281s raise ValueError(
281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s f"or a single float to set both timeouts to the same value."
281s )
281s elif isinstance(timeout, TimeoutSauce):
281s pass
281s else:
281s timeout = TimeoutSauce(connect=timeout, read=timeout)
281s
281s try:
281s > resp = conn.urlopen(
281s method=request.method,
281s url=url,
281s body=request.body,
281s headers=request.headers,
281s redirect=False,
281s assert_same_host=False,
281s preload_content=False,
281s decode_content=False,
281s retries=self.max_retries,
281s timeout=timeout,
281s chunked=chunked,
281s )
281s
281s /usr/lib/python3/dist-packages/requests/adapters.py:486:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
281s retries = retries.increment(
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s method = 'GET', url = '/a%40b/api/contents', response = None
281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
281s _pool =
281s _stacktrace =
281s
281s def increment(
281s self,
281s method: str | None = None,
281s url: str | None = None,
281s response: BaseHTTPResponse | None = None,
281s error: Exception | None = None,
281s _pool: ConnectionPool | None = None,
281s _stacktrace: TracebackType | None = None,
281s ) -> Retry:
281s """Return a new Retry object with incremented retry counters.
281s
281s :param response: A response object, or None, if the server did not
281s return a response.
281s :type response: :class:`~urllib3.response.BaseHTTPResponse`
281s :param Exception error: An error encountered during the request, or
281s None if the response was received successfully.
281s
281s :return: A new ``Retry`` object.
281s """
281s if self.total is False and error:
281s # Disabled, indicate to re-raise the error.
281s raise reraise(type(error), error, _stacktrace)
281s
281s total = self.total
281s if total is not None:
281s total -= 1
281s
281s connect = self.connect
281s read = self.read
281s redirect = self.redirect
281s status_count = self.status
281s other = self.other
281s cause = "unknown"
281s status = None
281s redirect_location = None
281s
281s if error and self._is_connection_error(error):
281s # Connect retry?
281s if connect is False:
281s raise reraise(type(error), error, _stacktrace)
281s elif connect is not None:
281s connect -= 1
281s
281s elif error and self._is_read_error(error):
281s # Read retry?
281s if read is False or method is None or not self._is_method_retryable(method):
281s raise reraise(type(error), error, _stacktrace)
281s elif read is not None:
281s read -= 1
281s
281s elif error:
281s # Other retry?
281s if other is not None:
281s other -= 1
281s
281s elif response and response.get_redirect_location():
281s # Redirect retry?
281s if redirect is not None:
281s redirect -= 1
281s cause = "too many redirects"
281s response_redirect_location = response.get_redirect_location()
281s if response_redirect_location:
281s redirect_location = response_redirect_location
281s status = response.status
281s
281s else:
281s # Incrementing because of a server error like a 500 in
281s # status_forcelist and the given method is in the allowed_methods
281s cause = ResponseError.GENERIC_ERROR
281s if response and response.status:
281s if status_count is not None:
281s status_count -= 1
281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
281s status = response.status
281s
281s history = self.history + (
281s RequestHistory(method, url, error, status, redirect_location),
281s )
281s
281s new_retry = self.new(
281s total=total,
281s connect=connect,
281s read=read,
281s redirect=redirect,
281s status=status_count,
281s other=other,
281s history=history,
281s )
281s
281s if new_retry.is_exhausted():
281s reason = error or
ResponseError(cause)
281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s
281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
281s
281s During handling of the above exception, another exception occurred:
281s
281s cls =
281s
281s @classmethod
281s def wait_until_alive(cls):
281s """Wait for the server to be alive"""
281s url = cls.base_url() + 'api/contents'
281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s try:
281s > cls.fetch_url(url)
281s
281s notebook/tests/launchnotebook.py:53:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s notebook/tests/launchnotebook.py:82: in fetch_url
281s return requests.get(url)
281s /usr/lib/python3/dist-packages/requests/api.py:73: in get
281s return request("get", url, params=params, **kwargs)
281s /usr/lib/python3/dist-packages/requests/api.py:59: in request
281s return session.request(method=method, url=url, **kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
281s resp = self.send(prep, **send_kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
281s r = adapter.send(request, **kwargs)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s self =
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s
281s def send(
281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s ):
281s """Sends PreparedRequest object. Returns Response object.
281s
281s :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s
281s         try:
281s             resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s
281s         except (ProtocolError, OSError) as err:
281s             raise ConnectionError(err, request=request)
281s
281s         except MaxRetryError as e:
281s             if isinstance(e.reason, ConnectTimeoutError):
281s                 # TODO: Remove this in 3.0.0: see #2811
281s                 if not isinstance(e.reason, NewConnectionError):
281s                     raise ConnectTimeout(e, request=request)
281s
281s             if isinstance(e.reason, ResponseError):
281s                 raise RetryError(e, request=request)
281s
281s             if isinstance(e.reason, _ProxyError):
281s                 raise ProxyError(e, request=request)
281s
281s             if isinstance(e.reason, _SSLError):
281s                 # This branch is for urllib3 v1.22 and later.
281s                 raise SSLError(e, request=request)
281s
281s >           raise ConnectionError(e, request=request)
281s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s
281s The above exception was the direct cause of the following exception:
281s
281s cls = 
281s
281s     @classmethod
281s     def setup_class(cls):
281s         cls.tmp_dir = TemporaryDirectory()
281s         def tmp(*parts):
281s             path = os.path.join(cls.tmp_dir.name, *parts)
281s             try:
281s                 os.makedirs(path)
281s             except OSError as e:
281s                 if e.errno != errno.EEXIST:
281s                     raise
281s             return path
281s
281s         cls.home_dir = tmp('home')
281s         data_dir = cls.data_dir = tmp('data')
281s         config_dir = cls.config_dir = tmp('config')
281s         runtime_dir = cls.runtime_dir = tmp('runtime')
281s         cls.notebook_dir = tmp('notebooks')
281s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
281s         cls.env_patch.start()
281s         # Patch systemwide & user-wide data & config directories, to isolate
281s         # the tests from oddities of the local setup. But leave Python env
281s         # locations alone, so data files for e.g. nbconvert are accessible.
281s         # If this isolation isn't sufficient, you may need to run the tests in
281s         # a virtualenv or conda env.
281s         cls.path_patch = patch.multiple(
281s             jupyter_core.paths,
281s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
281s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
281s         )
281s         cls.path_patch.start()
281s
281s         config = cls.config or Config()
281s         config.NotebookNotary.db_file = ':memory:'
281s
281s         cls.token = hexlify(os.urandom(4)).decode('ascii')
281s
281s         started = Event()
281s         def start_thread():
281s             try:
281s                 bind_args = cls.get_bind_args()
281s                 app = cls.notebook = NotebookApp(
281s                     port_retries=0,
281s                     open_browser=False,
281s                     config_dir=cls.config_dir,
281s                     data_dir=cls.data_dir,
281s                     runtime_dir=cls.runtime_dir,
281s                     notebook_dir=cls.notebook_dir,
281s                     base_url=cls.url_prefix,
281s                     config=config,
281s                     allow_root=True,
281s                     token=cls.token,
281s                     **bind_args
281s                 )
281s                 if "asyncio" in sys.modules:
281s                     app._init_asyncio_patch()
281s                     import asyncio
281s
281s                     asyncio.set_event_loop(asyncio.new_event_loop())
281s                     # Patch the current loop in order to match production
281s                     # behavior
281s                     import nest_asyncio
281s
281s                     nest_asyncio.apply()
281s                 # don't register signal handler during tests
281s                 app.init_signal = lambda : None
281s                 # clear log handlers and propagate to root for nose to capture it
281s                 # needs to be redone after initialize, which reconfigures logging
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 app.initialize(argv=cls.get_argv())
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 loop = IOLoop.current()
281s                 loop.add_callback(started.set)
281s                 app.start()
281s             finally:
281s                 # set the event, so failure to start doesn't cause a hang
281s                 started.set()
281s                 app.session_manager.close()
281s         cls.notebook_thread = Thread(target=start_thread)
281s         cls.notebook_thread.daemon = True
281s         cls.notebook_thread.start()
281s         started.wait()
281s >       cls.wait_until_alive()
281s
281s notebook/tests/launchnotebook.py:198: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s cls = 
281s
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E                   RuntimeError: The notebook server failed to start
281s
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s _______________ ERROR at setup of APITest.test_upload_txt_hidden _______________
281s
281s self = 
281s
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s
281s         :return: New socket connection.
281s         """
281s         try:
281s >           sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s     raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s
281s     def create_connection(
281s         address: tuple[str, int],
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         source_address: tuple[str, int] | None = None,
281s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s     ) -> socket.socket:
281s         """Connect to *address* and return the socket object.
281s
281s         Convenience function. Connect to *address* (a 2-tuple ``(host,
281s         port)``) and return the socket object. Passing the optional
281s         *timeout* parameter will set the timeout on the socket instance
281s         before attempting to connect. If no *timeout* is supplied, the
281s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
281s         is used. If *source_address* is set it must be a tuple of (host, port)
281s         for the socket to bind as a source address before making the connection.
281s         An host of '' or port 0 tells the OS to use the default.
281s         """
281s
281s         host, port = address
281s         if host.startswith("["):
281s             host = host.strip("[]")
281s         err = None
281s
281s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
281s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
281s         # The original create_connection function always returns all records.
281s         family = allowed_gai_family()
281s
281s         try:
281s             host.encode("idna")
281s         except UnicodeError:
281s             raise LocationParseError(f"'{host}', label empty or too long") from None
281s
281s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
281s             af, socktype, proto, canonname, sa = res
281s             sock = None
281s             try:
281s                 sock = socket.socket(af, socktype, proto)
281s
281s                 # If provided, set socket level options before connecting.
281s                 _set_socket_options(sock, socket_options)
281s
281s                 if timeout is not _DEFAULT_TIMEOUT:
281s                     sock.settimeout(timeout)
281s                 if source_address:
281s                     sock.bind(source_address)
281s >               sock.connect(sa)
281s E               ConnectionRefusedError: [Errno 111] Connection refused
281s
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
281s
281s The above exception was the direct cause of the following exception:
281s
281s self = 
281s method = 'GET', url = '/a%40b/api/contents', body = None
281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s redirect = False, assert_same_host = False
281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
281s release_conn = False, chunked = False, body_pos = None, preload_content = False
281s decode_content = False, response_kw = {}
281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
281s destination_scheme = None, conn = None, release_this_conn = True
281s http_tunnel_required = False, err = None, clean_exit = False
281s
281s     def urlopen(  # type: ignore[override]
281s         self,
281s         method: str,
281s         url: str,
281s         body: _TYPE_BODY | None = None,
281s         headers: typing.Mapping[str, str] | None = None,
281s         retries: Retry | bool | int | None = None,
281s         redirect: bool = True,
281s         assert_same_host: bool = True,
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         pool_timeout: int | None = None,
281s         release_conn: bool | None = None,
281s         chunked: bool = False,
281s         body_pos: _TYPE_BODY_POSITION | None = None,
281s         preload_content: bool = True,
281s         decode_content: bool = True,
281s         **response_kw: typing.Any,
281s     ) -> BaseHTTPResponse:
281s         """
281s         Get a connection from the pool and perform an HTTP request. This is the
281s         lowest level call for making a request, so you'll need to specify all
281s         the raw details.
281s
281s         .. note::
281s
281s            More commonly, it's appropriate to use a convenience method
281s            such as :meth:`request`.
281s
281s         .. note::
281s
281s            `release_conn` will only behave as expected if
281s            `preload_content=False` because we want to make
281s            `preload_content=False` the default behaviour someday soon without
281s            breaking backwards compatibility.
281s
281s         :param method:
281s             HTTP request method (such as GET, POST, PUT, etc.)
281s
281s         :param url:
281s             The URL to perform the request on.
281s
281s         :param body:
281s             Data to send in the request body, either :class:`str`, :class:`bytes`,
281s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
281s
281s         :param headers:
281s             Dictionary of custom headers to send, such as User-Agent,
281s             If-None-Match, etc. If None, pool headers are used. If provided,
281s             these headers completely replace any pool-specific headers.
281s
281s         :param retries:
281s             Configure the number of retries to allow before raising a
281s             :class:`~urllib3.exceptions.MaxRetryError` exception.
281s
281s             Pass ``None`` to retry until you receive a response. Pass a
281s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
281s             over different types of retries.
281s             Pass an integer number to retry connection errors that many times,
281s             but no other types of errors. Pass zero to never retry.
281s
281s             If ``False``, then retries are disabled and any exception is raised
281s             immediately. Also, instead of raising a MaxRetryError on redirects,
281s             the redirect response will be returned.
281s
281s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
281s
281s         :param redirect:
281s             If True, automatically handle redirects (status codes 301, 302,
281s             303, 307, 308). Each redirect counts as a retry. Disabling retries
281s             will disable redirect, too.
281s
281s         :param assert_same_host:
281s             If ``True``, will make sure that the host of the pool requests is
281s             consistent else will raise HostChangedError. When ``False``, you can
281s             use the pool on an HTTP proxy and request foreign hosts.
281s
281s         :param timeout:
281s             If specified, overrides the default timeout for this one
281s             request. It may be a float (in seconds) or an instance of
281s             :class:`urllib3.util.Timeout`.
281s
281s         :param pool_timeout:
281s             If set and the pool is set to block=True, then this method will
281s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
281s             connection is available within the time period.
281s
281s         :param bool preload_content:
281s             If True, the response's body will be preloaded into memory.
281s
281s         :param bool decode_content:
281s             If True, will attempt to decode the body based on the
281s             'content-encoding' header.
281s
281s         :param release_conn:
281s             If False, then the urlopen call will not release the connection
281s             back into the pool once a response is received (but will release if
281s             you read the entire contents of the response such as when
281s             `preload_content=True`). This is useful if you're not preloading
281s             the response's content immediately. You will need to call
281s             ``r.release_conn()`` on the response ``r`` to return the connection
281s             back into the pool. If None, it takes the value of ``preload_content``
281s             which defaults to ``True``.
281s
281s         :param bool chunked:
281s             If True, urllib3 will send the body using chunked transfer
281s             encoding. Otherwise, urllib3 will send the body using the standard
281s             content-length form. Defaults to False.
281s
281s         :param int body_pos:
281s             Position to seek to in file-like body in the event of a retry or
281s             redirect. Typically this won't need to be set because urllib3 will
281s             auto-populate the value when needed.
281s         """
281s         parsed_url = parse_url(url)
281s         destination_scheme = parsed_url.scheme
281s
281s         if headers is None:
281s             headers = self.headers
281s
281s         if not isinstance(retries, Retry):
281s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
281s
281s         if release_conn is None:
281s             release_conn = preload_content
281s
281s         # Check host
281s         if assert_same_host and not self.is_same_host(url):
281s             raise HostChangedError(self, url, retries)
281s
281s         # Ensure that the URL we're connecting to is properly encoded
281s         if url.startswith("/"):
281s             url = to_str(_encode_target(url))
281s         else:
281s             url = to_str(parsed_url.url)
281s
281s         conn = None
281s
281s         # Track whether `conn` needs to be released before
281s         # returning/raising/recursing. Update this variable if necessary, and
281s         # leave `release_conn` constant throughout the function. That way, if
281s         # the function recurses, the original value of `release_conn` will be
281s         # passed down into the recursive call, and its value will be respected.
281s         #
281s         # See issue #651 [1] for details.
281s         #
281s         # [1] 
281s         release_this_conn = release_conn
281s
281s         http_tunnel_required = connection_requires_http_tunnel(
281s             self.proxy, self.proxy_config, destination_scheme
281s         )
281s
281s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
281s         # have to copy the headers dict so we can safely change it without those
281s         # changes being reflected in anyone else's copy.
281s         if not http_tunnel_required:
281s             headers = headers.copy()  # type: ignore[attr-defined]
281s             headers.update(self.proxy_headers)  # type: ignore[union-attr]
281s
281s         # Must keep the exception bound to a separate variable or else Python 3
281s         # complains about UnboundLocalError.
281s         err = None
281s
281s         # Keep track of whether we cleanly exited the except block. This
281s         # ensures we do proper cleanup in finally.
281s         clean_exit = False
281s
281s         # Rewind body position, if needed. Record current position
281s         # for future rewinds in the event of a redirect/retry.
281s         body_pos = set_file_position(body, body_pos)
281s
281s         try:
281s             # Request a connection from the queue.
281s             timeout_obj = self._get_timeout(timeout)
281s             conn = self._get_conn(timeout=pool_timeout)
281s
281s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
281s
281s             # Is this a closed/new connection that requires CONNECT tunnelling?
281s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
281s                 try:
281s                     self._prepare_proxy(conn)
281s                 except (BaseSSLError, OSError, SocketTimeout) as e:
281s                     self._raise_timeout(
281s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
281s                     )
281s                     raise
281s
281s             # If we're going to release the connection in ``finally:``, then
281s             # the response doesn't need to know about the connection. Otherwise
281s             # it will also try to release it and we'll have a double-release
281s             # mess.
281s             response_conn = conn if not release_conn else None
281s
281s             # Make the request on the HTTPConnection object
281s >           response = self._make_request(
281s                 conn,
281s                 method,
281s                 url,
281s                 timeout=timeout_obj,
281s                 body=body,
281s                 headers=headers,
281s                 chunked=chunked,
281s                 retries=retries,
281s                 response_conn=response_conn,
281s                 preload_content=preload_content,
281s                 decode_content=decode_content,
281s                 **response_kw,
281s             )
281s
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
281s     conn.request(
281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
281s     self.endheaders()
281s /usr/lib/python3.12/http/client.py:1331: in endheaders
281s     self._send_output(message_body, encode_chunked=encode_chunked)
281s /usr/lib/python3.12/http/client.py:1091: in _send_output
281s     self.send(msg)
281s /usr/lib/python3.12/http/client.py:1035: in send
281s     self.connect()
281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
281s     self.sock = self._new_conn()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s self = 
281s
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s
281s         :return: New socket connection.
281s         """
281s         try:
281s             sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s         except socket.gaierror as e:
281s             raise NameResolutionError(self.host, self, e) from e
281s         except SocketTimeout as e:
281s             raise ConnectTimeoutError(
281s                 self,
281s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
281s             ) from e
281s
281s         except OSError as e:
281s >           raise NewConnectionError(
281s                 self, f"Failed to establish a new connection: {e}"
281s             ) from e
281s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
281s
281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
281s
281s The above exception was the direct cause of the following exception:
281s
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s
281s         try:
281s >           resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s
281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
281s     retries = retries.increment(
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s method = 'GET', url = '/a%40b/api/contents', response = None
281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
281s _pool = 
281s _stacktrace = 
281s
281s     def increment(
281s         self,
281s         method: str | None = None,
281s         url: str | None = None,
281s         response: BaseHTTPResponse | None = None,
281s         error: Exception | None = None,
281s         _pool: ConnectionPool | None = None,
281s         _stacktrace: TracebackType | None = None,
281s     ) -> Retry:
281s         """Return a new Retry object with incremented retry counters.
281s
281s         :param response: A response object, or None, if the server did not
281s             return a response.
281s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
281s         :param Exception error: An error encountered during the request, or
281s             None if the response was received successfully.
281s
281s         :return: A new ``Retry`` object.
281s         """
281s         if self.total is False and error:
281s             # Disabled, indicate to re-raise the error.
281s             raise reraise(type(error), error, _stacktrace)
281s
281s         total = self.total
281s         if total is not None:
281s             total -= 1
281s
281s         connect = self.connect
281s         read = self.read
281s         redirect = self.redirect
281s         status_count = self.status
281s         other = self.other
281s         cause = "unknown"
281s         status = None
281s         redirect_location = None
281s
281s         if error and self._is_connection_error(error):
281s             # Connect retry?
281s             if connect is False:
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif connect is not None:
281s                 connect -= 1
281s
281s         elif error and self._is_read_error(error):
281s             # Read retry?
281s             if read is False or method is None or not self._is_method_retryable(method):
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif read is not None:
281s                 read -= 1
281s
281s         elif error:
281s             # Other retry?
281s             if other is not None:
281s                 other -= 1
281s
281s         elif response and response.get_redirect_location():
281s             # Redirect retry?
281s             if redirect is not None:
281s                 redirect -= 1
281s             cause = "too many redirects"
281s             response_redirect_location = response.get_redirect_location()
281s             if response_redirect_location:
281s                 redirect_location = response_redirect_location
281s             status = response.status
281s
281s         else:
281s             # Incrementing because of a server error like a 500 in
281s             # status_forcelist and the given method is in the allowed_methods
281s             cause = ResponseError.GENERIC_ERROR
281s             if response and response.status:
281s                 if status_count is not None:
281s                     status_count -= 1
281s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
281s                 status = response.status
281s
281s         history = self.history + (
281s             RequestHistory(method, url, error, status, redirect_location),
281s         )
281s
281s         new_retry = self.new(
281s             total=total,
281s             connect=connect,
281s             read=read,
281s             redirect=redirect,
281s             status=status_count,
281s             other=other,
281s             history=history,
281s         )
281s
281s         if new_retry.is_exhausted():
281s             reason = error or ResponseError(cause)
281s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
281s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s
281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
281s
281s During handling of the above exception, another exception occurred:
281s
281s cls = 
281s
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s >               cls.fetch_url(url)
281s
281s notebook/tests/launchnotebook.py:53: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s notebook/tests/launchnotebook.py:82: in fetch_url
281s     return requests.get(url)
281s /usr/lib/python3/dist-packages/requests/api.py:73: in get
281s     return request("get", url, params=params, **kwargs)
281s /usr/lib/python3/dist-packages/requests/api.py:59: in request
281s     return session.request(method=method, url=url, **kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
281s     resp = self.send(prep, **send_kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
281s     r = adapter.send(request, **kwargs)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s
281s         try:
281s             resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s
281s         except (ProtocolError, OSError) as err:
281s             raise ConnectionError(err, request=request)
281s
281s         except MaxRetryError as e:
281s             if isinstance(e.reason, ConnectTimeoutError):
281s                 # TODO: Remove this in 3.0.0: see #2811
281s                 if not isinstance(e.reason, NewConnectionError):
281s                     raise ConnectTimeout(e, request=request)
281s
281s             if isinstance(e.reason, ResponseError):
281s                 raise RetryError(e, request=request)
281s
281s             if isinstance(e.reason, _ProxyError):
281s                 raise ProxyError(e, request=request)
281s
281s             if isinstance(e.reason, _SSLError):
281s                 # This branch is for urllib3 v1.22 and later.
281s                 raise SSLError(e, request=request)
281s
281s >           raise ConnectionError(e, request=request)
281s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s
281s The above exception was the direct cause of the following exception:
281s
281s cls = 
281s
281s     @classmethod
281s     def setup_class(cls):
281s         cls.tmp_dir = TemporaryDirectory()
281s         def tmp(*parts):
281s             path = os.path.join(cls.tmp_dir.name, *parts)
281s             try:
281s                 os.makedirs(path)
281s             except OSError as e:
281s                 if e.errno != errno.EEXIST:
281s                     raise
281s             return path
281s
281s         cls.home_dir = tmp('home')
281s         data_dir = cls.data_dir = tmp('data')
281s         config_dir = cls.config_dir = tmp('config')
281s         runtime_dir = cls.runtime_dir = tmp('runtime')
281s         cls.notebook_dir = tmp('notebooks')
281s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
281s         cls.env_patch.start()
281s         # Patch systemwide & user-wide data & config directories, to isolate
281s         # the tests from oddities of the local setup. But leave Python env
281s         # locations alone, so data files for e.g. nbconvert are accessible.
281s         # If this isolation isn't sufficient, you may need to run the tests in
281s         # a virtualenv or conda env.
281s         cls.path_patch = patch.multiple(
281s             jupyter_core.paths,
281s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
281s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
281s         )
281s         cls.path_patch.start()
281s
281s         config = cls.config or Config()
281s         config.NotebookNotary.db_file = ':memory:'
281s
281s         cls.token = hexlify(os.urandom(4)).decode('ascii')
281s
281s         started = Event()
281s         def start_thread():
281s             try:
281s                 bind_args = cls.get_bind_args()
281s                 app = cls.notebook = NotebookApp(
281s                     port_retries=0,
281s                     open_browser=False,
281s                     config_dir=cls.config_dir,
281s                     data_dir=cls.data_dir,
281s                     runtime_dir=cls.runtime_dir,
281s                     notebook_dir=cls.notebook_dir,
281s                     base_url=cls.url_prefix,
281s                     config=config,
281s                     allow_root=True,
281s                     token=cls.token,
281s                     **bind_args
281s                 )
281s                 if "asyncio" in sys.modules:
281s                     app._init_asyncio_patch()
281s                     import asyncio
281s
281s                     asyncio.set_event_loop(asyncio.new_event_loop())
281s                     # Patch the current loop in order to match production
281s                     # behavior
281s                     import nest_asyncio
281s
281s                     nest_asyncio.apply()
281s                 # don't register signal handler during tests
281s                 app.init_signal = lambda : None
281s                 # clear log handlers and propagate to root for nose to capture it
281s                 # needs to be redone after initialize, which reconfigures logging
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 app.initialize(argv=cls.get_argv())
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 loop = IOLoop.current()
281s                 loop.add_callback(started.set)
281s                 app.start()
281s             finally:
281s                 # set the event, so failure to start doesn't cause a hang
281s                 started.set()
281s                 app.session_manager.close()
281s         cls.notebook_thread = Thread(target=start_thread)
281s         cls.notebook_thread.daemon = True
281s         cls.notebook_thread.start()
281s         started.wait()
281s >       cls.wait_until_alive()
281s
281s notebook/tests/launchnotebook.py:198: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s cls = 
281s
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E                   RuntimeError: The notebook server failed to start
281s
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s ___________________ ERROR at setup of APITest.test_upload_v2 ___________________
281s
281s self = 
281s
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s
281s         :return: New socket connection.
281s         """
281s         try:
281s >           sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s     raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s
281s     def create_connection(
281s         address: tuple[str, int],
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         source_address: tuple[str, int] | None = None,
281s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s     ) -> socket.socket:
281s         """Connect to *address* and return the socket object.
281s
281s         Convenience function. Connect to *address* (a 2-tuple ``(host,
281s         port)``) and return the socket object. Passing the optional
281s         *timeout* parameter will set the timeout on the socket instance
281s         before attempting to connect. If no *timeout* is supplied, the
281s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
281s         is used. If *source_address* is set it must be a tuple of (host, port)
281s         for the socket to bind as a source address before making the connection.
281s         An host of '' or port 0 tells the OS to use the default.
281s         """
281s
281s         host, port = address
281s         if host.startswith("["):
281s             host = host.strip("[]")
281s         err = None
281s
281s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
281s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
281s         # The original create_connection function always returns all records.
281s         family = allowed_gai_family()
281s
281s         try:
281s             host.encode("idna")
281s         except UnicodeError:
281s             raise LocationParseError(f"'{host}', label empty or too long") from None
281s
281s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
281s             af, socktype, proto, canonname, sa = res
281s             sock = None
281s             try:
281s                 sock = socket.socket(af, socktype, proto)
281s
281s                 # If provided, set socket level options before connecting.
281s                 _set_socket_options(sock, socket_options)
281s
281s                 if timeout is not _DEFAULT_TIMEOUT:
281s                     sock.settimeout(timeout)
281s                 if source_address:
281s                     sock.bind(source_address)
281s >               sock.connect(sa)
281s E               ConnectionRefusedError: [Errno 111] Connection refused
281s
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
281s
281s The above exception was the direct cause of the following exception:
281s
281s self = 
281s method = 'GET', url = '/a%40b/api/contents', body = None
281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s redirect = False, assert_same_host = False
281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
281s release_conn = False, chunked = False, body_pos = None, preload_content = False
281s decode_content = False, response_kw = {}
281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
281s destination_scheme = None, conn = None, release_this_conn = True
281s http_tunnel_required = False, err = None, clean_exit = False
281s
281s     def urlopen(  # type: ignore[override]
281s         self,
281s         method: str,
281s         url: str,
281s         body: _TYPE_BODY | None = None,
281s         headers: typing.Mapping[str, str] | None = None,
281s         retries: Retry | bool | int | None = None,
281s         redirect: bool = True,
281s         assert_same_host: bool = True,
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         pool_timeout: int | None = None,
281s         release_conn: bool | None = None,
281s         chunked: bool = False,
281s         body_pos: _TYPE_BODY_POSITION | None = None,
281s         preload_content: bool = True,
281s         decode_content: bool = True,
281s         **response_kw: typing.Any,
281s     ) -> BaseHTTPResponse:
281s         """
281s         Get a connection from the pool and perform an HTTP request. This is the
281s         lowest level call for making a request, so you'll need to specify all
281s         the raw details.
281s
281s         .. note::
281s
281s            More commonly, it's appropriate to use a convenience method
281s            such as :meth:`request`.
281s
281s         .. note::
281s
281s            `release_conn` will only behave as expected if
281s            `preload_content=False` because we want to make
281s            `preload_content=False` the default behaviour someday soon without
281s            breaking backwards compatibility.
281s
281s         :param method:
281s             HTTP request method (such as GET, POST, PUT, etc.)
281s
281s         :param url:
281s             The URL to perform the request on.
281s
281s         :param body:
281s             Data to send in the request body, either :class:`str`, :class:`bytes`,
281s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
281s
281s         :param headers:
281s             Dictionary of custom headers to send, such as User-Agent,
281s             If-None-Match, etc. If None, pool headers are used. If provided,
281s             these headers completely replace any pool-specific headers.
281s
281s         :param retries:
281s             Configure the number of retries to allow before raising a
281s             :class:`~urllib3.exceptions.MaxRetryError` exception.
281s
281s             Pass ``None`` to retry until you receive a response. Pass a
281s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
281s             over different types of retries.
281s             Pass an integer number to retry connection errors that many times,
281s             but no other types of errors. Pass zero to never retry.
281s
281s             If ``False``, then retries are disabled and any exception is raised
281s             immediately. Also, instead of raising a MaxRetryError on redirects,
281s             the redirect response will be returned.
281s
281s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
281s
281s         :param redirect:
281s             If True, automatically handle redirects (status codes 301, 302,
281s             303, 307, 308). Each redirect counts as a retry. Disabling retries
281s             will disable redirect, too.
281s
281s         :param assert_same_host:
281s             If ``True``, will make sure that the host of the pool requests is
281s             consistent else will raise HostChangedError. When ``False``, you can
281s             use the pool on an HTTP proxy and request foreign hosts.
281s
281s         :param timeout:
281s             If specified, overrides the default timeout for this one
281s             request. It may be a float (in seconds) or an instance of
281s             :class:`urllib3.util.Timeout`.
281s
281s         :param pool_timeout:
281s             If set and the pool is set to block=True, then this method will
281s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
281s             connection is available within the time period.
281s
281s         :param bool preload_content:
281s             If True, the response's body will be preloaded into memory.
281s
281s         :param bool decode_content:
281s             If True, will attempt to decode the body based on the
281s             'content-encoding' header.
281s
281s         :param release_conn:
281s             If False, then the urlopen call will not release the connection
281s             back into the pool once a response is received (but will release if
281s             you read the entire contents of the response such as when
281s             `preload_content=True`). This is useful if you're not preloading
281s             the response's content immediately. You will need to call
281s             ``r.release_conn()`` on the response ``r`` to return the connection
281s             back into the pool. If None, it takes the value of ``preload_content``
281s             which defaults to ``True``.
281s
281s         :param bool chunked:
281s             If True, urllib3 will send the body using chunked transfer
281s             encoding. Otherwise, urllib3 will send the body using the standard
281s             content-length form. Defaults to False.
281s
281s         :param int body_pos:
281s             Position to seek to in file-like body in the event of a retry or
281s             redirect. Typically this won't need to be set because urllib3 will
281s             auto-populate the value when needed.
281s         """
281s         parsed_url = parse_url(url)
281s         destination_scheme = parsed_url.scheme
281s
281s         if headers is None:
281s             headers = self.headers
281s
281s         if not isinstance(retries, Retry):
281s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
281s
281s         if release_conn is None:
281s             release_conn = preload_content
281s
281s         # Check host
281s         if assert_same_host and not self.is_same_host(url):
281s             raise HostChangedError(self, url, retries)
281s
281s         # Ensure that the URL we're connecting to is properly encoded
281s         if url.startswith("/"):
281s             url = to_str(_encode_target(url))
281s         else:
281s             url = to_str(parsed_url.url)
281s
281s         conn = None
281s
281s         # Track whether `conn` needs to be released before
281s         # returning/raising/recursing. Update this variable if necessary, and
281s         # leave `release_conn` constant throughout the function. That way, if
281s         # the function recurses, the original value of `release_conn` will be
281s         # passed down into the recursive call, and its value will be respected.
281s         #
281s         # See issue #651 [1] for details.
281s         #
281s         # [1] 
281s         release_this_conn = release_conn
281s
281s         http_tunnel_required = connection_requires_http_tunnel(
281s             self.proxy, self.proxy_config, destination_scheme
281s         )
281s
281s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
281s         # have to copy the headers dict so we can safely change it without those
281s         # changes being reflected in anyone else's copy.
281s         if not http_tunnel_required:
281s             headers = headers.copy()  # type: ignore[attr-defined]
281s             headers.update(self.proxy_headers)  # type: ignore[union-attr]
281s
281s         # Must keep the exception bound to a separate variable or else Python 3
281s         # complains about UnboundLocalError.
281s         err = None
281s
281s         # Keep track of whether we cleanly exited the except block. This
281s         # ensures we do proper cleanup in finally.
281s         clean_exit = False
281s
281s         # Rewind body position, if needed. Record current position
281s         # for future rewinds in the event of a redirect/retry.
281s         body_pos = set_file_position(body, body_pos)
281s
281s         try:
281s             # Request a connection from the queue.
281s             timeout_obj = self._get_timeout(timeout)
281s             conn = self._get_conn(timeout=pool_timeout)
281s
281s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
281s
281s             # Is this a closed/new connection that requires CONNECT tunnelling?
281s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
281s                 try:
281s                     self._prepare_proxy(conn)
281s                 except (BaseSSLError, OSError, SocketTimeout) as e:
281s                     self._raise_timeout(
281s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
281s                     )
281s                     raise
281s
281s             # If we're going to release the connection in ``finally:``, then
281s             # the response doesn't need to know about the connection. Otherwise
281s             # it will also try to release it and we'll have a double-release
281s             # mess.
281s             response_conn = conn if not release_conn else None
281s
281s             # Make the request on the HTTPConnection object
281s >           response = self._make_request(
281s                 conn,
281s                 method,
281s                 url,
281s                 timeout=timeout_obj,
281s                 body=body,
281s                 headers=headers,
281s                 chunked=chunked,
281s                 retries=retries,
281s                 response_conn=response_conn,
281s                 preload_content=preload_content,
281s                 decode_content=decode_content,
281s                 **response_kw,
281s             )
281s
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
281s     conn.request(
281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
281s     self.endheaders()
281s /usr/lib/python3.12/http/client.py:1331: in endheaders
281s     self._send_output(message_body, encode_chunked=encode_chunked)
281s /usr/lib/python3.12/http/client.py:1091: in _send_output
281s     self.send(msg)
281s /usr/lib/python3.12/http/client.py:1035: in send
281s     self.connect()
281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
281s     self.sock = self._new_conn()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s self = 
281s
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s
281s         :return: New socket connection.
281s         """
281s         try:
281s             sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s         except socket.gaierror as e:
281s             raise NameResolutionError(self.host, self, e) from e
281s         except SocketTimeout as e:
281s             raise ConnectTimeoutError(
281s                 self,
281s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
281s             ) from e
281s
281s         except OSError as e:
281s >           raise NewConnectionError(
281s                 self, f"Failed to establish a new connection: {e}"
281s             ) from e
281s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
281s
281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
281s
281s The above exception was the direct cause of the following exception:
281s
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s
281s         try:
281s >           resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s
281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
281s     retries = retries.increment(
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s method = 'GET', url = '/a%40b/api/contents', response = None
281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
281s _pool = 
281s _stacktrace = 
281s
281s     def increment(
281s         self,
281s         method: str | None = None,
281s         url: str | None = None,
281s         response: BaseHTTPResponse | None = None,
281s         error: Exception | None = None,
281s         _pool: ConnectionPool | None = None,
281s         _stacktrace: TracebackType | None = None,
281s     ) -> Retry:
281s         """Return a new Retry object with incremented retry counters.
281s
281s         :param response: A response object, or None, if the server did not
281s             return a response.
281s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
281s         :param Exception error: An error encountered during the request, or
281s             None if the response was received successfully.
281s
281s         :return: A new ``Retry`` object.
281s         """
281s         if self.total is False and error:
281s             # Disabled, indicate to re-raise the error.
281s             raise reraise(type(error), error, _stacktrace)
281s
281s         total = self.total
281s         if total is not None:
281s             total -= 1
281s
281s         connect = self.connect
281s         read = self.read
281s         redirect = self.redirect
281s         status_count = self.status
281s         other = self.other
281s         cause = "unknown"
281s         status = None
281s         redirect_location = None
281s
281s         if error and self._is_connection_error(error):
281s             # Connect retry?
281s             if connect is False:
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif connect is not None:
281s                 connect -= 1
281s
281s         elif error and self._is_read_error(error):
281s             # Read retry?
281s             if read is False or method is None or not self._is_method_retryable(method):
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif read is not None:
281s                 read -= 1
281s
281s         elif error:
281s             # Other retry?
281s             if other is not None:
281s                 other -= 1
281s
281s         elif response and response.get_redirect_location():
281s             # Redirect retry?
281s             if redirect is not None:
281s                 redirect -= 1
281s             cause = "too many redirects"
281s             response_redirect_location = response.get_redirect_location()
281s             if response_redirect_location:
281s                 redirect_location = response_redirect_location
281s             status = response.status
281s
281s         else:
281s             # Incrementing because of a server error like a 500 in
281s             # status_forcelist and the given method is in the allowed_methods
281s             cause = ResponseError.GENERIC_ERROR
281s             if response and response.status:
281s                 if status_count is not None:
281s                     status_count -= 1
281s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
281s                 status = response.status
281s
281s         history = self.history + (
281s             RequestHistory(method, url, error, status, redirect_location),
281s         )
281s
281s         new_retry = self.new(
281s             total=total,
281s             connect=connect,
281s             read=read,
281s             redirect=redirect,
281s             status=status_count,
281s             other=other,
281s             history=history,
281s         )
281s
281s         if new_retry.is_exhausted():
281s             reason = error or ResponseError(cause)
281s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
281s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s
281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
281s
281s During handling of the above exception, another exception occurred:
281s
281s cls = 
281s
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s >               cls.fetch_url(url)
281s
281s notebook/tests/launchnotebook.py:53: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s notebook/tests/launchnotebook.py:82: in fetch_url
281s     return requests.get(url)
281s /usr/lib/python3/dist-packages/requests/api.py:73: in get
281s     return request("get", url, params=params, **kwargs)
281s /usr/lib/python3/dist-packages/requests/api.py:59: in request
281s     return session.request(method=method, url=url, **kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
281s     resp = self.send(prep, **send_kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
281s     r = adapter.send(request, **kwargs)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s
281s         try:
281s             resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s
281s         except (ProtocolError, OSError) as err:
281s             raise ConnectionError(err, request=request)
281s
281s         except MaxRetryError as e:
281s             if isinstance(e.reason, ConnectTimeoutError):
281s                 # TODO: Remove this in 3.0.0: see #2811
281s                 if not isinstance(e.reason, NewConnectionError):
281s                     raise ConnectTimeout(e, request=request)
281s
281s             if isinstance(e.reason, ResponseError):
281s                 raise RetryError(e, request=request)
281s
281s             if isinstance(e.reason, _ProxyError):
281s                 raise ProxyError(e, request=request)
281s
281s             if isinstance(e.reason, _SSLError):
281s                 # This branch is for urllib3 v1.22 and later.
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s _______ ERROR at setup of GenericFileCheckpointsAPITest.test_checkpoints _______ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
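The `retries` parameter documentation above distinguishes four cases: `None`, a `Retry` object, an integer, and `False`. A pure-Python paraphrase of those documented semantics (a didactic stand-in, not urllib3's implementation; note `False` must be tested before `int`, because `isinstance(False, int)` is true in Python):

```python
def describe_retries(retries):
    # Order matters: bool is a subclass of int in Python.
    if retries is None:
        return "retry until a response is received"
    if retries is False:
        return "retries disabled; exceptions raised immediately"
    if isinstance(retries, int):
        return f"retry connection errors up to {retries} time(s), nothing else"
    return "fine-grained control via a Retry object"

print(describe_retries(0))      # -> retry connection errors up to 0 time(s), nothing else
print(describe_retries(False))  # -> retries disabled; exceptions raised immediately
```

The failing test above uses `Retry(total=0, ...)`, i.e. the "never retry" case, which is why a single refused connection is immediately fatal.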
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
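The comment in the `urlopen` body above explains why `headers` is copied before merging in `self.proxy_headers`: updating the dict in place would leak proxy headers into the caller's mapping. The pattern in isolation (a sketch, not the pool's actual code):

```python
def merge_proxy_headers(headers, proxy_headers):
    # Copy first so the caller's dict is never mutated.
    merged = dict(headers)
    merged.update(proxy_headers)
    return merged

base = {"Accept": "*/*"}
merged = merge_proxy_headers(base, {"Proxy-Authorization": "Basic ..."})
print("Proxy-Authorization" in base)    # -> False
print("Proxy-Authorization" in merged)  # -> True
```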
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
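The call chain above bottoms out in `sock.connect(sa)` raising `ConnectionRefusedError` (errno 111) because nothing is listening on the notebook server's port. The failure mode is easy to reproduce with the stdlib alone: bind to obtain a free ephemeral port, close the listener, then connect to the now-closed port (there is a negligible race if another process grabs the port in between):

```python
import socket

probe = socket.socket()
probe.bind(("127.0.0.1", 0))   # let the OS pick a free port
port = probe.getsockname()[1]
probe.close()                  # nothing is listening here any more

sock = socket.socket()
sock.settimeout(2.0)
try:
    sock.connect(("127.0.0.1", port))
    refused = False
except ConnectionRefusedError:
    refused = True
finally:
    sock.close()
print("refused:", refused)  # -> refused: True
```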
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
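`_new_conn` above wraps the low-level `OSError` in `NewConnectionError` using `raise ... from e`, which sets `__cause__` and produces the "The above exception was the direct cause of the following exception" lines seen throughout this log. A self-contained sketch of that chaining (the exception class here is a hypothetical stand-in for urllib3's, not the real one):

```python
import errno

class NewConnectionError(Exception):
    """Hypothetical stand-in for urllib3's exception of the same name."""

def new_conn():
    try:
        # OSError with a recognized errno is auto-specialized by Python,
        # so this actually constructs a ConnectionRefusedError.
        raise OSError(errno.ECONNREFUSED, "Connection refused")
    except OSError as e:
        # `raise ... from e` records __cause__, which is what the
        # "direct cause" lines in the traceback report.
        raise NewConnectionError(f"Failed to establish a new connection: {e}") from e

try:
    new_conn()
except NewConnectionError as e:
    print(type(e.__cause__).__name__)  # -> ConnectionRefusedError
```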
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
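The adapter code above normalizes the three accepted `timeout` shapes into one object: a `(connect, read)` tuple, an already-built timeout object, or a single number applied to both phases. A pure-Python paraphrase of that branching (a sketch; requests actually builds a urllib3 `Timeout`, aliased `TimeoutSauce`):

```python
def normalize_timeout(timeout):
    if isinstance(timeout, tuple):
        try:
            connect, read = timeout
        except ValueError:
            raise ValueError(
                f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
                f"or a single float to set both timeouts to the same value."
            )
        return {"connect": connect, "read": read}
    if isinstance(timeout, dict):  # stand-in for "already a Timeout object"
        return timeout
    return {"connect": timeout, "read": timeout}

print(normalize_timeout((3.05, 27)))  # -> {'connect': 3.05, 'read': 27}
print(normalize_timeout(5))           # -> {'connect': 5, 'read': 5}
```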
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
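`wait_until_alive` above is a bounded polling loop: probe the server, swallow the failure, sleep, and give up after `MAX_WAITTIME` with a diagnostic error chained to the last exception. The same pattern in self-contained form (names and budget values are illustrative, not the test harness's):

```python
import time

def wait_until_alive(probe, max_wait=5.0, poll_interval=0.05):
    # Retry `probe` until it succeeds or the time budget is spent.
    deadline = time.monotonic() + max_wait
    last_error = None
    while time.monotonic() < deadline:
        try:
            return probe()
        except Exception as e:  # polling deliberately retries on any failure
            last_error = e
            time.sleep(poll_interval)
    raise RuntimeError("server failed to start") from last_error

attempts = {"n": 0}
def flaky_probe():
    # Fails twice (server still booting), then succeeds.
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionRefusedError("not up yet")
    return "alive"

print(wait_until_alive(flaky_probe))  # -> alive
```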
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
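`setup_class` above builds its sandbox directories with a makedirs-and-ignore-`EEXIST` dance; since Python 3.2 the same thing is spelled `os.makedirs(path, exist_ok=True)`. A compact equivalent of the `tmp` helper (a sketch, not the harness's code):

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as base:
    def tmp(*parts):
        path = os.path.join(base, *parts)
        os.makedirs(path, exist_ok=True)  # replaces the try/except EEXIST block
        return path

    home = tmp("home")
    again = tmp("home")  # second call is a no-op, not an error
    print(home == again, os.path.isdir(home))  # -> True True
```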
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s _ ERROR at setup of GenericFileCheckpointsAPITest.test_checkpoints_separate_root _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
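The `Retry.increment` frames above show why the request fails on the first attempt: the pool was created with `Retry(total=0)`, so a single connection error drives the counter negative, `is_exhausted()` trips, and `MaxRetryError` is raised. A minimal standalone sketch of that counter logic (not the real urllib3 class):

```python
from dataclasses import dataclass

# Minimal sketch of the exhaustion check in urllib3's Retry.increment,
# not the real class: each error yields a new object with the counter
# decremented, and a negative counter means "exhausted".
@dataclass(frozen=True)
class MiniRetry:
    total: int

    def is_exhausted(self) -> bool:
        return self.total < 0

    def increment(self) -> "MiniRetry":
        new = MiniRetry(self.total - 1)
        if new.is_exhausted():
            # urllib3 raises MaxRetryError(_pool, url, reason) here
            raise RuntimeError("max retries exceeded")
        return new
```

With `total=0`, as in this log, the very first `increment()` raises, which is why one refused connection immediately surfaces as `MaxRetryError`.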
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
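In the `except MaxRetryError` branch above, requests inspects `e.reason` to choose which of its own exceptions to raise; a refused connection carries a `NewConnectionError` reason, so it falls through every special case to plain `ConnectionError`, exactly as seen in this log. A simplified sketch of that dispatch, with stub classes standing in for the real urllib3 exception types:

```python
# Stand-ins for the urllib3 reason types the adapter inspects. In
# urllib3, NewConnectionError subclasses ConnectTimeoutError, which is
# why the real code needs the extra isinstance check (the TODO above).
class ConnectTimeoutError(Exception): pass
class NewConnectionError(ConnectTimeoutError): pass
class ResponseError(Exception): pass
class ProxyError(Exception): pass
class SSLError(Exception): pass

def translate(reason: Exception) -> str:
    # Simplified sketch of the branch in requests' HTTPAdapter.send:
    # map MaxRetryError.reason onto the requests-level exception name.
    if isinstance(reason, ConnectTimeoutError) and not isinstance(
        reason, NewConnectionError
    ):
        return "ConnectTimeout"
    if isinstance(reason, ResponseError):
        return "RetryError"
    if isinstance(reason, ProxyError):
        return "ProxyError"
    if isinstance(reason, SSLError):
        return "SSLError"
    return "ConnectionError"
```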
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
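`setup_class` above runs the notebook server on a daemon thread and signals readiness through a `threading.Event`; note the `finally: started.set()`, which guarantees the waiting test thread wakes up even when startup fails, so the failure is reported by `wait_until_alive` rather than as a hang. A stripped-down version of that pattern, with hypothetical no-op `init`/`serve_forever` callables standing in for `app.initialize()`/`app.start()`:

```python
import threading

started = threading.Event()

def run_server(init, serve_forever):
    # init() prepares the server (and may raise); serve_forever() blocks.
    # Readiness is signalled before blocking, and the finally clause
    # guarantees the waiter is released even if init() raises.
    try:
        init()
        started.set()
        serve_forever()
    finally:
        started.set()

thread = threading.Thread(
    target=run_server,
    args=(lambda: None, lambda: None),  # hypothetical no-op server
    daemon=True,
)
thread.start()
started.wait(timeout=5)
```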
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s __ ERROR at setup of GenericFileCheckpointsAPITest.test_config_did_something ___ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
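`wait_until_alive` above polls the contents API for up to `MAX_WAITTIME` seconds, swallowing errors while the server thread is still alive and converting them to `RuntimeError("The notebook server failed to start")` once it is not. The shape of that loop, with hypothetical `probe` and `server_alive` callables in place of `fetch_url` and the thread-liveness check:

```python
import time

def wait_until_alive(probe, server_alive, max_wait=30.0, poll_interval=0.1):
    # probe() raises while the server is not yet answering;
    # server_alive() reports whether the server thread is still running.
    for _ in range(int(max_wait / poll_interval)):
        try:
            return probe()
        except Exception as e:
            if not server_alive():
                # mirrors the RuntimeError raised in launchnotebook.py
                raise RuntimeError("The notebook server failed to start") from e
            time.sleep(poll_interval)
    raise TimeoutError("server did not come up within max_wait")
```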
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
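The `create_connection` frames above show the root cause: every address returned by `getaddrinfo` for `('localhost', 12341)` is tried in turn, each `connect()` gets `ECONNREFUSED` because nothing is listening on that port, and the last `OSError` is re-raised. A condensed sketch of that resolve-and-try loop (not the full urllib3 implementation, which also handles socket options and source addresses):

```python
import socket

def create_connection(address, timeout=None):
    # Condensed sketch of urllib3.util.connection.create_connection:
    # try every resolved (family, sockaddr) pair, remember the last
    # OSError, and re-raise it if no attempt succeeds.
    host, port = address
    err = None
    for af, socktype, proto, _canon, sa in socket.getaddrinfo(
        host, port, socket.AF_UNSPEC, socket.SOCK_STREAM
    ):
        sock = None
        try:
            sock = socket.socket(af, socktype, proto)
            if timeout is not None:
                sock.settimeout(timeout)
            sock.connect(sa)
            return sock
        except OSError as e:
            err = e
            if sock is not None:
                sock.close()
    raise err if err is not None else OSError("getaddrinfo returned no results")
```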
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s __________ ERROR at setup of GenericFileCheckpointsAPITest.test_copy ___________ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s             resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s         except (ProtocolError, OSError) as err:
281s             raise ConnectionError(err, request=request)
281s 
281s         except MaxRetryError as e:
281s             if isinstance(e.reason, ConnectTimeoutError):
281s                 # TODO: Remove this in 3.0.0: see #2811
281s                 if not isinstance(e.reason, NewConnectionError):
281s                     raise ConnectTimeout(e, request=request)
281s 
281s             if isinstance(e.reason, ResponseError):
281s                 raise RetryError(e, request=request)
281s 
281s             if isinstance(e.reason, _ProxyError):
281s                 raise ProxyError(e, request=request)
281s 
281s             if isinstance(e.reason, _SSLError):
281s                 # This branch is for urllib3 v1.22 and later.
281s                 raise SSLError(e, request=request)
281s 
281s >           raise ConnectionError(e, request=request)
281s E   requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def setup_class(cls):
281s         cls.tmp_dir = TemporaryDirectory()
281s         def tmp(*parts):
281s             path = os.path.join(cls.tmp_dir.name, *parts)
281s             try:
281s                 os.makedirs(path)
281s             except OSError as e:
281s                 if e.errno != errno.EEXIST:
281s                     raise
281s             return path
281s 
281s         cls.home_dir = tmp('home')
281s         data_dir = cls.data_dir = tmp('data')
281s         config_dir = cls.config_dir = tmp('config')
281s         runtime_dir = cls.runtime_dir = tmp('runtime')
281s         cls.notebook_dir = tmp('notebooks')
281s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
281s         cls.env_patch.start()
281s         # Patch systemwide & user-wide data & config directories, to isolate
281s         # the tests from oddities of the local setup. But leave Python env
281s         # locations alone, so data files for e.g. nbconvert are accessible.
281s         # If this isolation isn't sufficient, you may need to run the tests in
281s         # a virtualenv or conda env.
281s         cls.path_patch = patch.multiple(
281s             jupyter_core.paths,
281s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
281s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
281s         )
281s         cls.path_patch.start()
281s 
281s         config = cls.config or Config()
281s         config.NotebookNotary.db_file = ':memory:'
281s 
281s         cls.token = hexlify(os.urandom(4)).decode('ascii')
281s 
281s         started = Event()
281s         def start_thread():
281s             try:
281s                 bind_args = cls.get_bind_args()
281s                 app = cls.notebook = NotebookApp(
281s                     port_retries=0,
281s                     open_browser=False,
281s                     config_dir=cls.config_dir,
281s                     data_dir=cls.data_dir,
281s                     runtime_dir=cls.runtime_dir,
281s                     notebook_dir=cls.notebook_dir,
281s                     base_url=cls.url_prefix,
281s                     config=config,
281s                     allow_root=True,
281s                     token=cls.token,
281s                     **bind_args
281s                 )
281s                 if "asyncio" in sys.modules:
281s                     app._init_asyncio_patch()
281s                     import asyncio
281s 
281s                     asyncio.set_event_loop(asyncio.new_event_loop())
281s                     # Patch the current loop in order to match production
281s                     # behavior
281s                     import nest_asyncio
281s 
281s                     nest_asyncio.apply()
281s                 # don't register signal handler during tests
281s                 app.init_signal = lambda : None
281s                 # clear log handlers and propagate to root for nose to capture it
281s                 # needs to be redone after initialize, which reconfigures logging
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 app.initialize(argv=cls.get_argv())
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 loop = IOLoop.current()
281s                 loop.add_callback(started.set)
281s                 app.start()
281s             finally:
281s                 # set the event, so failure to start doesn't cause a hang
281s                 started.set()
281s                 app.session_manager.close()
281s         cls.notebook_thread = Thread(target=start_thread)
281s         cls.notebook_thread.daemon = True
281s         cls.notebook_thread.start()
281s         started.wait()
281s >       cls.wait_until_alive()
281s 
281s notebook/tests/launchnotebook.py:198: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E   RuntimeError: The notebook server failed to start
281s 
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s _____ ERROR at setup of GenericFileCheckpointsAPITest.test_copy_400_hidden _____
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s >           sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s     raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s 
281s     def create_connection(
281s         address: tuple[str, int],
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         source_address: tuple[str, int] | None = None,
281s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s     ) -> socket.socket:
281s         """Connect to *address* and return the socket object.
281s 
281s         Convenience function. Connect to *address* (a 2-tuple ``(host,
281s         port)``) and return the socket object. Passing the optional
281s         *timeout* parameter will set the timeout on the socket instance
281s         before attempting to connect. If no *timeout* is supplied, the
281s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
281s         is used. If *source_address* is set it must be a tuple of (host, port)
281s         for the socket to bind as a source address before making the connection.
281s         An host of '' or port 0 tells the OS to use the default.
281s         """
281s 
281s         host, port = address
281s         if host.startswith("["):
281s             host = host.strip("[]")
281s         err = None
281s 
281s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
281s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
281s         # The original create_connection function always returns all records.
281s         family = allowed_gai_family()
281s 
281s         try:
281s             host.encode("idna")
281s         except UnicodeError:
281s             raise LocationParseError(f"'{host}', label empty or too long") from None
281s 
281s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
281s             af, socktype, proto, canonname, sa = res
281s             sock = None
281s             try:
281s                 sock = socket.socket(af, socktype, proto)
281s 
281s                 # If provided, set socket level options before connecting.
281s                 _set_socket_options(sock, socket_options)
281s 
281s                 if timeout is not _DEFAULT_TIMEOUT:
281s                     sock.settimeout(timeout)
281s                 if source_address:
281s                     sock.bind(source_address)
281s >               sock.connect(sa)
281s E               ConnectionRefusedError: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s method = 'GET', url = '/a%40b/api/contents', body = None
281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s redirect = False, assert_same_host = False
281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
281s release_conn = False, chunked = False, body_pos = None, preload_content = False
281s decode_content = False, response_kw = {}
281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
281s destination_scheme = None, conn = None, release_this_conn = True
281s http_tunnel_required = False, err = None, clean_exit = False
281s 
281s     def urlopen(  # type: ignore[override]
281s         self,
281s         method: str,
281s         url: str,
281s         body: _TYPE_BODY | None = None,
281s         headers: typing.Mapping[str, str] | None = None,
281s         retries: Retry | bool | int | None = None,
281s         redirect: bool = True,
281s         assert_same_host: bool = True,
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         pool_timeout: int | None = None,
281s         release_conn: bool | None = None,
281s         chunked: bool = False,
281s         body_pos: _TYPE_BODY_POSITION | None = None,
281s         preload_content: bool = True,
281s         decode_content: bool = True,
281s         **response_kw: typing.Any,
281s     ) -> BaseHTTPResponse:
281s         """
281s         Get a connection from the pool and perform an HTTP request. This is the
281s         lowest level call for making a request, so you'll need to specify all
281s         the raw details.
281s 
281s         .. note::
281s 
281s            More commonly, it's appropriate to use a convenience method
281s            such as :meth:`request`.
281s 
281s         .. note::
281s 
281s            `release_conn` will only behave as expected if
281s            `preload_content=False` because we want to make
281s            `preload_content=False` the default behaviour someday soon without
281s            breaking backwards compatibility.
281s 
281s         :param method:
281s             HTTP request method (such as GET, POST, PUT, etc.)
281s 
281s         :param url:
281s             The URL to perform the request on.
281s 
281s         :param body:
281s             Data to send in the request body, either :class:`str`, :class:`bytes`,
281s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
281s 
281s         :param headers:
281s             Dictionary of custom headers to send, such as User-Agent,
281s             If-None-Match, etc. If None, pool headers are used. If provided,
281s             these headers completely replace any pool-specific headers.
281s 
281s         :param retries:
281s             Configure the number of retries to allow before raising a
281s             :class:`~urllib3.exceptions.MaxRetryError` exception.
281s 
281s             Pass ``None`` to retry until you receive a response. Pass a
281s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
281s             over different types of retries.
281s             Pass an integer number to retry connection errors that many times,
281s             but no other types of errors. Pass zero to never retry.
281s 
281s             If ``False``, then retries are disabled and any exception is raised
281s             immediately. Also, instead of raising a MaxRetryError on redirects,
281s             the redirect response will be returned.
281s 
281s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
281s 
281s         :param redirect:
281s             If True, automatically handle redirects (status codes 301, 302,
281s             303, 307, 308). Each redirect counts as a retry. Disabling retries
281s             will disable redirect, too.
281s 
281s         :param assert_same_host:
281s             If ``True``, will make sure that the host of the pool requests is
281s             consistent else will raise HostChangedError. When ``False``, you can
281s             use the pool on an HTTP proxy and request foreign hosts.
281s 
281s         :param timeout:
281s             If specified, overrides the default timeout for this one
281s             request. It may be a float (in seconds) or an instance of
281s             :class:`urllib3.util.Timeout`.
281s 
281s         :param pool_timeout:
281s             If set and the pool is set to block=True, then this method will
281s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
281s             connection is available within the time period.
281s 
281s         :param bool preload_content:
281s             If True, the response's body will be preloaded into memory.
281s 
281s         :param bool decode_content:
281s             If True, will attempt to decode the body based on the
281s             'content-encoding' header.
281s 
281s         :param release_conn:
281s             If False, then the urlopen call will not release the connection
281s             back into the pool once a response is received (but will release if
281s             you read the entire contents of the response such as when
281s             `preload_content=True`). This is useful if you're not preloading
281s             the response's content immediately. You will need to call
281s             ``r.release_conn()`` on the response ``r`` to return the connection
281s             back into the pool. If None, it takes the value of ``preload_content``
281s             which defaults to ``True``.
281s 
281s         :param bool chunked:
281s             If True, urllib3 will send the body using chunked transfer
281s             encoding. Otherwise, urllib3 will send the body using the standard
281s             content-length form. Defaults to False.
281s 
281s         :param int body_pos:
281s             Position to seek to in file-like body in the event of a retry or
281s             redirect. Typically this won't need to be set because urllib3 will
281s             auto-populate the value when needed.
281s         """
281s         parsed_url = parse_url(url)
281s         destination_scheme = parsed_url.scheme
281s 
281s         if headers is None:
281s             headers = self.headers
281s 
281s         if not isinstance(retries, Retry):
281s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
281s 
281s         if release_conn is None:
281s             release_conn = preload_content
281s 
281s         # Check host
281s         if assert_same_host and not self.is_same_host(url):
281s             raise HostChangedError(self, url, retries)
281s 
281s         # Ensure that the URL we're connecting to is properly encoded
281s         if url.startswith("/"):
281s             url = to_str(_encode_target(url))
281s         else:
281s             url = to_str(parsed_url.url)
281s 
281s         conn = None
281s 
281s         # Track whether `conn` needs to be released before
281s         # returning/raising/recursing. Update this variable if necessary, and
281s         # leave `release_conn` constant throughout the function. That way, if
281s         # the function recurses, the original value of `release_conn` will be
281s         # passed down into the recursive call, and its value will be respected.
281s         #
281s         # See issue #651 [1] for details.
281s         #
281s         # [1] 
281s         release_this_conn = release_conn
281s 
281s         http_tunnel_required = connection_requires_http_tunnel(
281s             self.proxy, self.proxy_config, destination_scheme
281s         )
281s 
281s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
281s         # have to copy the headers dict so we can safely change it without those
281s         # changes being reflected in anyone else's copy.
281s         if not http_tunnel_required:
281s             headers = headers.copy()  # type: ignore[attr-defined]
281s             headers.update(self.proxy_headers)  # type: ignore[union-attr]
281s 
281s         # Must keep the exception bound to a separate variable or else Python 3
281s         # complains about UnboundLocalError.
281s         err = None
281s 
281s         # Keep track of whether we cleanly exited the except block. This
281s         # ensures we do proper cleanup in finally.
281s         clean_exit = False
281s 
281s         # Rewind body position, if needed. Record current position
281s         # for future rewinds in the event of a redirect/retry.
281s         body_pos = set_file_position(body, body_pos)
281s 
281s         try:
281s             # Request a connection from the queue.
281s             timeout_obj = self._get_timeout(timeout)
281s             conn = self._get_conn(timeout=pool_timeout)
281s 
281s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
281s 
281s             # Is this a closed/new connection that requires CONNECT tunnelling?
281s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
281s                 try:
281s                     self._prepare_proxy(conn)
281s                 except (BaseSSLError, OSError, SocketTimeout) as e:
281s                     self._raise_timeout(
281s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
281s                     )
281s                     raise
281s 
281s             # If we're going to release the connection in ``finally:``, then
281s             # the response doesn't need to know about the connection. Otherwise
281s             # it will also try to release it and we'll have a double-release
281s             # mess.
281s             response_conn = conn if not release_conn else None
281s 
281s             # Make the request on the HTTPConnection object
281s >           response = self._make_request(
281s                 conn,
281s                 method,
281s                 url,
281s                 timeout=timeout_obj,
281s                 body=body,
281s                 headers=headers,
281s                 chunked=chunked,
281s                 retries=retries,
281s                 response_conn=response_conn,
281s                 preload_content=preload_content,
281s                 decode_content=decode_content,
281s                 **response_kw,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
281s     conn.request(
281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
281s     self.endheaders()
281s /usr/lib/python3.12/http/client.py:1331: in endheaders
281s     self._send_output(message_body, encode_chunked=encode_chunked)
281s /usr/lib/python3.12/http/client.py:1091: in _send_output
281s     self.send(msg)
281s /usr/lib/python3.12/http/client.py:1035: in send
281s     self.connect()
281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
281s     self.sock = self._new_conn()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s             sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s         except socket.gaierror as e:
281s             raise NameResolutionError(self.host, self, e) from e
281s         except SocketTimeout as e:
281s             raise ConnectTimeoutError(
281s                 self,
281s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
281s             ) from e
281s 
281s         except OSError as e:
281s >           raise NewConnectionError(
281s                 self, f"Failed to establish a new connection: {e}"
281s             ) from e
281s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s >           resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
281s     retries = retries.increment(
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s method = 'GET', url = '/a%40b/api/contents', response = None
281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
281s _pool = 
281s _stacktrace = 
281s 
281s     def increment(
281s         self,
281s         method: str | None = None,
281s         url: str | None = None,
281s         response: BaseHTTPResponse | None = None,
281s         error: Exception | None = None,
281s         _pool: ConnectionPool | None = None,
281s         _stacktrace: TracebackType | None = None,
281s     ) -> Retry:
281s         """Return a new Retry object with incremented retry counters.
281s 
281s         :param response: A response object, or None, if the server did not
281s             return a response.
281s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
281s         :param Exception error: An error encountered during the request, or
281s             None if the response was received successfully.
281s 
281s         :return: A new ``Retry`` object.
281s         """
281s         if self.total is False and error:
281s             # Disabled, indicate to re-raise the error.
281s             raise reraise(type(error), error, _stacktrace)
281s 
281s         total = self.total
281s         if total is not None:
281s             total -= 1
281s 
281s         connect = self.connect
281s         read = self.read
281s         redirect = self.redirect
281s         status_count = self.status
281s         other = self.other
281s         cause = "unknown"
281s         status = None
281s         redirect_location = None
281s 
281s         if error and self._is_connection_error(error):
281s             # Connect retry?
281s             if connect is False:
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif connect is not None:
281s                 connect -= 1
281s 
281s         elif error and self._is_read_error(error):
281s             # Read retry?
281s             if read is False or method is None or not self._is_method_retryable(method):
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif read is not None:
281s                 read -= 1
281s 
281s         elif error:
281s             # Other retry?
281s             if other is not None:
281s                 other -= 1
281s 
281s         elif response and response.get_redirect_location():
281s             # Redirect retry?
281s             if redirect is not None:
281s                 redirect -= 1
281s             cause = "too many redirects"
281s             response_redirect_location = response.get_redirect_location()
281s             if response_redirect_location:
281s                 redirect_location = response_redirect_location
281s             status = response.status
281s 
281s         else:
281s             # Incrementing because of a server error like a 500 in
281s             # status_forcelist and the given method is in the allowed_methods
281s             cause = ResponseError.GENERIC_ERROR
281s             if response and response.status:
281s                 if status_count is not None:
281s                     status_count -= 1
281s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
281s                 status = response.status
281s 
281s         history = self.history + (
281s             RequestHistory(method, url, error, status, redirect_location),
281s         )
281s 
281s         new_retry = self.new(
281s             total=total,
281s             connect=connect,
281s             read=read,
281s             redirect=redirect,
281s             status=status_count,
281s             other=other,
281s             history=history,
281s         )
281s 
281s         if new_retry.is_exhausted():
281s             reason = error or ResponseError(cause)
281s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
281s E   urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s >               cls.fetch_url(url)
281s 
281s notebook/tests/launchnotebook.py:53: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s notebook/tests/launchnotebook.py:82: in fetch_url
281s     return requests.get(url)
281s /usr/lib/python3/dist-packages/requests/api.py:73: in get
281s     return request("get", url, params=params, **kwargs)
281s /usr/lib/python3/dist-packages/requests/api.py:59: in request
281s     return session.request(method=method, url=url, **kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
281s     resp = self.send(prep, **send_kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
281s     r = adapter.send(request, **kwargs)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s             resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s         except (ProtocolError, OSError) as err:
281s             raise ConnectionError(err, request=request)
281s 
281s         except MaxRetryError as e:
281s             if isinstance(e.reason, ConnectTimeoutError):
281s                 # TODO: Remove this in 3.0.0: see #2811
281s                 if not isinstance(e.reason, NewConnectionError):
281s                     raise ConnectTimeout(e, request=request)
281s 
281s             if isinstance(e.reason, ResponseError):
281s                 raise RetryError(e, request=request)
281s 
281s             if isinstance(e.reason, _ProxyError):
281s                 raise ProxyError(e, request=request)
281s 
281s             if isinstance(e.reason, _SSLError):
281s                 # This branch is for urllib3 v1.22 and later.
281s                 raise SSLError(e, request=request)
281s 
281s >           raise ConnectionError(e, request=request)
281s E   requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def setup_class(cls):
281s         cls.tmp_dir = TemporaryDirectory()
281s         def tmp(*parts):
281s             path = os.path.join(cls.tmp_dir.name, *parts)
281s             try:
281s                 os.makedirs(path)
281s             except OSError as e:
281s                 if e.errno != errno.EEXIST:
281s                     raise
281s             return path
281s 
281s         cls.home_dir = tmp('home')
281s         data_dir = cls.data_dir = tmp('data')
281s         config_dir = cls.config_dir = tmp('config')
281s         runtime_dir = cls.runtime_dir = tmp('runtime')
281s         cls.notebook_dir = tmp('notebooks')
281s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
281s         cls.env_patch.start()
281s         # Patch systemwide & user-wide data & config directories, to isolate
281s         # the tests from oddities of the local setup. But leave Python env
281s         # locations alone, so data files for e.g. nbconvert are accessible.
281s         # If this isolation isn't sufficient, you may need to run the tests in
281s         # a virtualenv or conda env.
281s         cls.path_patch = patch.multiple(
281s             jupyter_core.paths,
281s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
281s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
281s         )
281s         cls.path_patch.start()
281s 
281s         config = cls.config or Config()
281s         config.NotebookNotary.db_file = ':memory:'
281s 
281s         cls.token = hexlify(os.urandom(4)).decode('ascii')
281s 
281s         started = Event()
281s         def start_thread():
281s             try:
281s                 bind_args = cls.get_bind_args()
281s                 app = cls.notebook = NotebookApp(
281s                     port_retries=0,
281s                     open_browser=False,
281s                     config_dir=cls.config_dir,
281s                     data_dir=cls.data_dir,
281s                     runtime_dir=cls.runtime_dir,
281s                     notebook_dir=cls.notebook_dir,
281s                     base_url=cls.url_prefix,
281s                     config=config,
281s                     allow_root=True,
281s                     token=cls.token,
281s                     **bind_args
281s                 )
281s                 if "asyncio" in sys.modules:
281s                     app._init_asyncio_patch()
281s                     import asyncio
281s 
281s                     asyncio.set_event_loop(asyncio.new_event_loop())
281s                     # Patch the current loop in order to match production
281s                     # behavior
281s                     import nest_asyncio
281s 
281s                     nest_asyncio.apply()
281s                 # don't register signal handler during tests
281s                 app.init_signal = lambda : None
281s                 # clear log handlers and propagate to root for nose to capture it
281s                 # needs to be redone after initialize, which reconfigures logging
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 app.initialize(argv=cls.get_argv())
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 loop = IOLoop.current()
281s                 loop.add_callback(started.set)
281s                 app.start()
281s             finally:
281s                 # set the event, so failure to start doesn't cause a hang
281s                 started.set()
281s                 app.session_manager.close()
281s         cls.notebook_thread = Thread(target=start_thread)
281s         cls.notebook_thread.daemon = True
281s         cls.notebook_thread.start()
281s         started.wait()
281s >       cls.wait_until_alive()
281s 
281s notebook/tests/launchnotebook.py:198: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s cls = 
281s 
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s ________ ERROR at setup of GenericFileCheckpointsAPITest.test_copy_copy ________ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s _set_socket_options(sock, socket_options)
281s 
281s if timeout is not _DEFAULT_TIMEOUT:
281s sock.settimeout(timeout)
281s if source_address:
281s sock.bind(source_address)
281s > sock.connect(sa)
281s E ConnectionRefusedError: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s method = 'GET', url = '/a%40b/api/contents', body = None
281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s redirect = False, assert_same_host = False
281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
281s release_conn = False, chunked = False, body_pos = None, preload_content = False
281s decode_content = False, response_kw = {}
281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
281s destination_scheme = None, conn = None, release_this_conn = True
281s http_tunnel_required = False, err = None, clean_exit = False
281s 
281s def urlopen( # type: ignore[override]
281s self,
281s method: str,
281s url: str,
281s body: _TYPE_BODY | None = None,
281s headers: typing.Mapping[str, str] | None = None,
281s retries: Retry | bool | int | None = None,
281s redirect: bool = True,
281s assert_same_host: bool = True,
281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s pool_timeout: int | None = None,
281s release_conn: bool | None = None,
281s chunked: bool = False,
281s body_pos: _TYPE_BODY_POSITION | None = None,
281s preload_content: bool = True,
281s decode_content: bool = True,
281s **response_kw: typing.Any,
281s ) -> BaseHTTPResponse:
281s """
281s Get a connection from the pool and perform an HTTP request. This is the
281s lowest level call for making a request, so you'll need to specify all
281s the raw details.
281s 
281s .. note::
281s 
281s More commonly, it's appropriate to use a convenience method
281s such as :meth:`request`.
281s 
281s .. note::
281s 
281s `release_conn` will only behave as expected if
281s `preload_content=False` because we want to make
281s `preload_content=False` the default behaviour someday soon without
281s breaking backwards compatibility.
281s 
281s :param method:
281s HTTP request method (such as GET, POST, PUT, etc.)
281s 
281s :param url:
281s The URL to perform the request on.
281s 
281s :param body:
281s Data to send in the request body, either :class:`str`, :class:`bytes`,
281s an iterable of :class:`str`/:class:`bytes`, or a file-like object.
281s 
281s :param headers:
281s Dictionary of custom headers to send, such as User-Agent,
281s If-None-Match, etc. If None, pool headers are used. If provided,
281s these headers completely replace any pool-specific headers.
281s 
281s :param retries:
281s Configure the number of retries to allow before raising a
281s :class:`~urllib3.exceptions.MaxRetryError` exception.
281s 
281s Pass ``None`` to retry until you receive a response. Pass a
281s :class:`~urllib3.util.retry.Retry` object for fine-grained control
281s over different types of retries.
281s Pass an integer number to retry connection errors that many times,
281s but no other types of errors. Pass zero to never retry.
281s 
281s If ``False``, then retries are disabled and any exception is raised
281s immediately. Also, instead of raising a MaxRetryError on redirects,
281s the redirect response will be returned.
281s 
281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
281s 
281s :param redirect:
281s If True, automatically handle redirects (status codes 301, 302,
281s 303, 307, 308). Each redirect counts as a retry. Disabling retries
281s will disable redirect, too.
281s 
281s :param assert_same_host:
281s If ``True``, will make sure that the host of the pool requests is
281s consistent else will raise HostChangedError. When ``False``, you can
281s use the pool on an HTTP proxy and request foreign hosts.
281s 
281s :param timeout:
281s If specified, overrides the default timeout for this one
281s request. It may be a float (in seconds) or an instance of
281s :class:`urllib3.util.Timeout`.
281s 
281s :param pool_timeout:
281s If set and the pool is set to block=True, then this method will
281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no
281s connection is available within the time period.
281s 
281s :param bool preload_content:
281s If True, the response's body will be preloaded into memory.
281s 
281s :param bool decode_content:
281s If True, will attempt to decode the body based on the
281s 'content-encoding' header.
281s 
281s :param release_conn:
281s If False, then the urlopen call will not release the connection
281s back into the pool once a response is received (but will release if
281s you read the entire contents of the response such as when
281s `preload_content=True`). This is useful if you're not preloading
281s the response's content immediately. You will need to call
281s ``r.release_conn()`` on the response ``r`` to return the connection
281s back into the pool. If None, it takes the value of ``preload_content``
281s which defaults to ``True``.
281s 
281s :param bool chunked:
281s If True, urllib3 will send the body using chunked transfer
281s encoding. Otherwise, urllib3 will send the body using the standard
281s content-length form. Defaults to False.
281s 
281s :param int body_pos:
281s Position to seek to in file-like body in the event of a retry or
281s redirect. Typically this won't need to be set because urllib3 will
281s auto-populate the value when needed.
281s """
281s parsed_url = parse_url(url)
281s destination_scheme = parsed_url.scheme
281s 
281s if headers is None:
281s headers = self.headers
281s 
281s if not isinstance(retries, Retry):
281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
281s 
281s if release_conn is None:
281s release_conn = preload_content
281s 
281s # Check host
281s if assert_same_host and not self.is_same_host(url):
281s raise HostChangedError(self, url, retries)
281s 
281s # Ensure that the URL we're connecting to is properly encoded
281s if url.startswith("/"):
281s url = to_str(_encode_target(url))
281s else:
281s url = to_str(parsed_url.url)
281s 
281s conn = None
281s 
281s # Track whether `conn` needs to be released before
281s # returning/raising/recursing. Update this variable if necessary, and
281s # leave `release_conn` constant throughout the function. That way, if
281s # the function recurses, the original value of `release_conn` will be
281s # passed down into the recursive call, and its value will be respected.
281s #
281s # See issue #651 [1] for details.
281s #
281s # [1] 
281s release_this_conn = release_conn
281s 
281s http_tunnel_required = connection_requires_http_tunnel(
281s self.proxy, self.proxy_config, destination_scheme
281s )
281s 
281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We
281s # have to copy the headers dict so we can safely change it without those
281s # changes being reflected in anyone else's copy.
281s if not http_tunnel_required:
281s headers = headers.copy() # type: ignore[attr-defined]
281s headers.update(self.proxy_headers) # type: ignore[union-attr]
281s 
281s # Must keep the exception bound to a separate variable or else Python 3
281s # complains about UnboundLocalError.
281s err = None
281s 
281s # Keep track of whether we cleanly exited the except block. This
281s # ensures we do proper cleanup in finally.
281s clean_exit = False
281s 
281s # Rewind body position, if needed. Record current position
281s # for future rewinds in the event of a redirect/retry.
281s body_pos = set_file_position(body, body_pos)
281s 
281s try:
281s # Request a connection from the queue.
281s timeout_obj = self._get_timeout(timeout)
281s conn = self._get_conn(timeout=pool_timeout)
281s 
281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment]
281s 
281s # Is this a closed/new connection that requires CONNECT tunnelling?
281s if self.proxy is not None and http_tunnel_required and conn.is_closed:
281s try:
281s self._prepare_proxy(conn)
281s except (BaseSSLError, OSError, SocketTimeout) as e:
281s self._raise_timeout(
281s err=e, url=self.proxy.url, timeout_value=conn.timeout
281s )
281s raise
281s 
281s # If we're going to release the connection in ``finally:``, then
281s # the response doesn't need to know about the connection. Otherwise
281s # it will also try to release it and we'll have a double-release
281s # mess.
281s response_conn = conn if not release_conn else None
281s 
281s # Make the request on the HTTPConnection object
281s > response = self._make_request(
281s conn,
281s method,
281s url,
281s timeout=timeout_obj,
281s body=body,
281s headers=headers,
281s chunked=chunked,
281s retries=retries,
281s response_conn=response_conn,
281s preload_content=preload_content,
281s decode_content=decode_content,
281s **response_kw,
281s )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
281s conn.request(
281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
281s self.endheaders()
281s /usr/lib/python3.12/http/client.py:1331: in endheaders
281s self._send_output(message_body, encode_chunked=encode_chunked)
281s /usr/lib/python3.12/http/client.py:1091: in _send_output
281s self.send(msg)
281s /usr/lib/python3.12/http/client.py:1035: in send
281s self.connect()
281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
281s self.sock = self._new_conn()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = 
281s 
281s def _new_conn(self) -> socket.socket:
281s """Establish a socket connection and set nodelay settings on it.
281s 
281s :return: New socket connection.
281s """
281s try:
281s sock = connection.create_connection(
281s (self._dns_host, self.port),
281s self.timeout,
281s source_address=self.source_address,
281s socket_options=self.socket_options,
281s )
281s except socket.gaierror as e:
281s raise NameResolutionError(self.host, self, e) from e
281s except SocketTimeout as e:
281s raise ConnectTimeoutError(
281s self,
281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
281s ) from e
281s 
281s except OSError as e:
281s > raise NewConnectionError(
281s self, f"Failed to establish a new connection: {e}"
281s ) from e
281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s def send(
281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s ):
281s """Sends PreparedRequest object. Returns Response object.
281s 
281s :param request: The :class:`PreparedRequest ` being sent.
281s :param stream: (optional) Whether to stream the request content.
281s :param timeout: (optional) How long to wait for the server to send
281s data before giving up, as a float, or a :ref:`(connect timeout,
281s read timeout) ` tuple.
281s :type timeout: float or tuple or urllib3 Timeout object
281s :param verify: (optional) Either a boolean, in which case it controls whether
281s we verify the server's TLS certificate, or a string, in which case it
281s must be a path to a CA bundle to use
281s :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s :param proxies: (optional) The proxies dictionary to apply to the request.
281s :rtype: requests.Response
281s """
281s 
281s try:
281s conn = self.get_connection(request.url, proxies)
281s except LocationValueError as e:
281s raise InvalidURL(e, request=request)
281s 
281s self.cert_verify(conn, request.url, verify, cert)
281s url = self.request_url(request, proxies)
281s self.add_headers(
281s request,
281s stream=stream,
281s timeout=timeout,
281s verify=verify,
281s cert=cert,
281s proxies=proxies,
281s )
281s 
281s chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s if isinstance(timeout, tuple):
281s try:
281s connect, read = timeout
281s timeout = TimeoutSauce(connect=connect, read=read)
281s except ValueError:
281s raise ValueError(
281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s f"or a single float to set both timeouts to the same value."
281s )
281s elif isinstance(timeout, TimeoutSauce):
281s pass
281s else:
281s timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s try:
281s > resp = conn.urlopen(
281s method=request.method,
281s url=url,
281s body=request.body,
281s headers=request.headers,
281s redirect=False,
281s assert_same_host=False,
281s preload_content=False,
281s decode_content=False,
281s retries=self.max_retries,
281s timeout=timeout,
281s chunked=chunked,
281s )
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
281s retries = retries.increment(
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s method = 'GET', url = '/a%40b/api/contents', response = None
281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
281s _pool = 
281s _stacktrace = 
281s 
281s def increment(
281s self,
281s method: str | None = None,
281s url: str | None = None,
281s response: BaseHTTPResponse | None = None,
281s error: Exception | None = None,
281s _pool: ConnectionPool | None = None,
281s _stacktrace: TracebackType | None = None,
281s ) -> Retry:
281s """Return a new Retry object with incremented retry counters.
281s 
281s :param response: A response object, or None, if the server did not
281s return a response.
281s :type response: :class:`~urllib3.response.BaseHTTPResponse`
281s :param Exception error: An error encountered during the request, or
281s None if the response was received successfully.
281s 
281s :return: A new ``Retry`` object.
281s """
281s if self.total is False and error:
281s # Disabled, indicate to re-raise the error.
281s raise reraise(type(error), error, _stacktrace)
281s 
281s total = self.total
281s if total is not None:
281s total -= 1
281s 
281s connect = self.connect
281s read = self.read
281s redirect = self.redirect
281s status_count = self.status
281s other = self.other
281s cause = "unknown"
281s status = None
281s redirect_location = None
281s 
281s if error and self._is_connection_error(error):
281s # Connect retry?
281s if connect is False:
281s raise reraise(type(error), error, _stacktrace)
281s elif connect is not None:
281s connect -= 1
281s 
281s elif error and self._is_read_error(error):
281s # Read retry?
281s if read is False or method is None or not self._is_method_retryable(method):
281s raise reraise(type(error), error, _stacktrace)
281s elif read is not None:
281s read -= 1
281s 
281s elif error:
281s # Other retry?
281s if other is not None:
281s other -= 1
281s 
281s elif response and response.get_redirect_location():
281s # Redirect retry?
281s if redirect is not None:
281s redirect -= 1
281s cause = "too many redirects"
281s response_redirect_location = response.get_redirect_location()
281s if response_redirect_location:
281s redirect_location = response_redirect_location
281s status = response.status
281s 
281s else:
281s # Incrementing because of a server error like a 500 in
281s # status_forcelist and the given method is in the allowed_methods
281s cause = ResponseError.GENERIC_ERROR
281s if response and response.status:
281s if status_count is not None:
281s status_count -= 1
281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
281s status = response.status
281s 
281s history = self.history + (
281s RequestHistory(method, url, error, status, redirect_location),
281s )
281s 
281s new_retry = self.new(
281s total=total,
281s connect=connect,
281s read=read,
281s redirect=redirect,
281s status=status_count,
281s other=other,
281s history=history,
281s )
281s 
281s if new_retry.is_exhausted():
281s reason = error or ResponseError(cause)
281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s cls = 
281s 
281s @classmethod
281s def wait_until_alive(cls):
281s """Wait for the server to be alive"""
281s url = cls.base_url() + 'api/contents'
281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s try:
281s > cls.fetch_url(url)
281s 
281s notebook/tests/launchnotebook.py:53: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s notebook/tests/launchnotebook.py:82: in fetch_url
281s return requests.get(url)
281s /usr/lib/python3/dist-packages/requests/api.py:73: in get
281s return request("get", url, params=params, **kwargs)
281s /usr/lib/python3/dist-packages/requests/api.py:59: in request
281s return session.request(method=method, url=url, **kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
281s resp = self.send(prep, **send_kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
281s r = adapter.send(request, **kwargs)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s def send(
281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s ):
281s """Sends PreparedRequest object. Returns Response object.
281s 
281s :param request: The :class:`PreparedRequest ` being sent.
281s :param stream: (optional) Whether to stream the request content.
281s :param timeout: (optional) How long to wait for the server to send
281s data before giving up, as a float, or a :ref:`(connect timeout,
281s read timeout) ` tuple.
281s :type timeout: float or tuple or urllib3 Timeout object
281s :param verify: (optional) Either a boolean, in which case it controls whether
281s we verify the server's TLS certificate, or a string, in which case it
281s must be a path to a CA bundle to use
281s :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s :param proxies: (optional) The proxies dictionary to apply to the request.
281s :rtype: requests.Response
281s """
281s 
281s try:
281s conn = self.get_connection(request.url, proxies)
281s except LocationValueError as e:
281s raise InvalidURL(e, request=request)
281s 
281s self.cert_verify(conn, request.url, verify, cert)
281s url = self.request_url(request, proxies)
281s self.add_headers(
281s request,
281s stream=stream,
281s timeout=timeout,
281s verify=verify,
281s cert=cert,
281s proxies=proxies,
281s )
281s 
281s chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s if isinstance(timeout, tuple):
281s try:
281s connect, read = timeout
281s timeout = TimeoutSauce(connect=connect, read=read)
281s except ValueError:
281s raise ValueError(
281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s f"or a single float to set both timeouts to the same value."
281s )
281s elif isinstance(timeout, TimeoutSauce):
281s pass
281s else:
281s timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s try:
281s resp = conn.urlopen(
281s method=request.method,
281s url=url,
281s body=request.body,
281s headers=request.headers,
281s redirect=False,
281s assert_same_host=False,
281s preload_content=False,
281s decode_content=False,
281s retries=self.max_retries,
281s timeout=timeout,
281s chunked=chunked,
281s )
281s 
281s except (ProtocolError, OSError) as err:
281s raise ConnectionError(err, request=request)
281s 
281s except MaxRetryError as e:
281s if isinstance(e.reason, ConnectTimeoutError):
281s # TODO: Remove this in 3.0.0: see #2811
281s if not isinstance(e.reason, NewConnectionError):
281s raise ConnectTimeout(e, request=request)
281s 
281s if isinstance(e.reason, ResponseError):
281s raise RetryError(e, request=request)
281s 
281s if isinstance(e.reason, _ProxyError):
281s raise ProxyError(e, request=request)
281s 
281s if isinstance(e.reason, _SSLError):
281s # This branch is for urllib3 v1.22 and later.
281s raise SSLError(e, request=request)
281s 
281s > raise ConnectionError(e, request=request)
281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s cls = 
281s 
281s @classmethod
281s def setup_class(cls):
281s cls.tmp_dir = TemporaryDirectory()
281s def tmp(*parts):
281s path = os.path.join(cls.tmp_dir.name, *parts)
281s try:
281s os.makedirs(path)
281s except OSError as e:
281s if e.errno != errno.EEXIST:
281s raise
281s return path
281s 
281s cls.home_dir = tmp('home')
281s data_dir = cls.data_dir = tmp('data')
281s config_dir = cls.config_dir = tmp('config')
281s runtime_dir = cls.runtime_dir = tmp('runtime')
281s cls.notebook_dir = tmp('notebooks')
281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
281s cls.env_patch.start()
281s # Patch systemwide & user-wide data & config directories, to isolate
281s # the tests from oddities of the local setup. But leave Python env
281s # locations alone, so data files for e.g. nbconvert are accessible.
281s # If this isolation isn't sufficient, you may need to run the tests in
281s # a virtualenv or conda env.
281s cls.path_patch = patch.multiple(
281s jupyter_core.paths,
281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
281s )
281s cls.path_patch.start()
281s 
281s config = cls.config or Config()
281s config.NotebookNotary.db_file = ':memory:'
281s 
281s cls.token = hexlify(os.urandom(4)).decode('ascii')
281s 
281s started = Event()
281s def start_thread():
281s try:
281s bind_args = cls.get_bind_args()
281s app = cls.notebook = NotebookApp(
281s port_retries=0,
281s open_browser=False,
281s config_dir=cls.config_dir,
281s data_dir=cls.data_dir,
281s runtime_dir=cls.runtime_dir,
281s notebook_dir=cls.notebook_dir,
281s base_url=cls.url_prefix,
281s config=config,
281s allow_root=True,
281s token=cls.token,
281s **bind_args
281s )
281s if "asyncio" in sys.modules:
281s app._init_asyncio_patch()
281s import asyncio
281s 
281s asyncio.set_event_loop(asyncio.new_event_loop())
281s # Patch the current loop in order to match production
281s # behavior
281s import nest_asyncio
281s 
281s nest_asyncio.apply()
281s # don't register signal handler during tests
281s app.init_signal = lambda : None
281s # clear log handlers and propagate to root for nose to capture it
281s # needs to be redone after initialize, which reconfigures logging
281s app.log.propagate = True
281s app.log.handlers = []
281s app.initialize(argv=cls.get_argv())
281s app.log.propagate = True
281s app.log.handlers = []
281s loop = IOLoop.current()
281s loop.add_callback(started.set)
281s app.start()
281s finally:
281s # set the event, so failure to start doesn't cause a hang
281s started.set()
281s app.session_manager.close()
281s cls.notebook_thread = Thread(target=start_thread)
281s cls.notebook_thread.daemon = True
281s cls.notebook_thread.start()
281s started.wait()
281s > cls.wait_until_alive()
281s 
281s notebook/tests/launchnotebook.py:198: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s cls = 
281s 
281s @classmethod
281s def wait_until_alive(cls):
281s """Wait for the server to be alive"""
281s url = cls.base_url() + 'api/contents'
281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s try:
281s cls.fetch_url(url)
281s except ModuleNotFoundError as error:
281s # Errors that should be immediately thrown back to caller
281s raise error
281s except Exception as e:
281s if not cls.notebook_thread.is_alive():
281s > raise RuntimeError("The notebook server failed to start") from e
281s E RuntimeError: The notebook server failed to start
281s 
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s ______ ERROR at setup of GenericFileCheckpointsAPITest.test_copy_dir_400 _______
281s 
281s self = 
281s 
281s def _new_conn(self) -> socket.socket:
281s """Establish a socket connection and set nodelay settings on it.
281s 
281s :return: New socket connection.
281s """
281s try:
281s > sock = connection.create_connection(
281s (self._dns_host, self.port),
281s self.timeout,
281s source_address=self.source_address,
281s socket_options=self.socket_options,
281s )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s 
281s def create_connection(
281s address: tuple[str, int],
281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s source_address: tuple[str, int] | None = None,
281s socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s ) -> socket.socket:
281s """Connect to *address* and return the socket object.
281s 
281s Convenience function. Connect to *address* (a 2-tuple ``(host,
281s port)``) and return the socket object. Passing the optional
281s *timeout* parameter will set the timeout on the socket instance
281s before attempting to connect. If no *timeout* is supplied, the
281s global default timeout setting returned by :func:`socket.getdefaulttimeout`
281s is used. If *source_address* is set it must be a tuple of (host, port)
281s for the socket to bind as a source address before making the connection.
281s An host of '' or port 0 tells the OS to use the default.
281s """
281s 
281s host, port = address
281s if host.startswith("["):
281s host = host.strip("[]")
281s err = None
281s 
281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets
281s # us select whether to work with IPv4 DNS records, IPv6 records, or both.
281s # The original create_connection function always returns all records.
281s family = allowed_gai_family()
281s 
281s try:
281s host.encode("idna")
281s except UnicodeError:
281s raise LocationParseError(f"'{host}', label empty or too long") from None
281s 
281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
281s af, socktype, proto, canonname, sa = res
281s sock = None
281s try:
281s sock = socket.socket(af, socktype, proto)
281s 
281s # If provided, set socket level options before connecting.
281s _set_socket_options(sock, socket_options)
281s 
281s if timeout is not _DEFAULT_TIMEOUT:
281s sock.settimeout(timeout)
281s if source_address:
281s sock.bind(source_address)
281s > sock.connect(sa)
281s E ConnectionRefusedError: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s method = 'GET', url = '/a%40b/api/contents', body = None
281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s redirect = False, assert_same_host = False
281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
281s release_conn = False, chunked = False, body_pos = None, preload_content = False
281s decode_content = False, response_kw = {}
281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
281s destination_scheme = None, conn = None, release_this_conn = True
281s http_tunnel_required = False, err = None, clean_exit = False
281s 
281s def urlopen( # type: ignore[override]
281s self,
281s method: str,
281s url: str,
281s body: _TYPE_BODY | None = None,
281s headers: typing.Mapping[str, str] | None = None,
281s retries: Retry | bool | int | None = None,
281s redirect: bool = True,
281s assert_same_host: bool = True,
281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s pool_timeout: int | None = None,
281s release_conn: bool | None = None,
281s chunked: bool = False,
281s body_pos: _TYPE_BODY_POSITION | None = None,
281s preload_content: bool = True,
281s decode_content: bool = True,
281s **response_kw: typing.Any,
281s ) -> BaseHTTPResponse:
281s """
281s Get a connection from the pool and perform an HTTP request. This is the
281s lowest level call for making a request, so you'll need to specify all
281s the raw details.
281s 
281s .. note::
281s 
281s More commonly, it's appropriate to use a convenience method
281s such as :meth:`request`.
281s 
281s .. note::
281s 
281s `release_conn` will only behave as expected if
281s `preload_content=False` because we want to make
281s `preload_content=False` the default behaviour someday soon without
281s breaking backwards compatibility.
281s 
281s :param method:
281s HTTP request method (such as GET, POST, PUT, etc.)
281s 
281s :param url:
281s The URL to perform the request on.
281s 
281s :param body:
281s Data to send in the request body, either :class:`str`, :class:`bytes`,
281s an iterable of :class:`str`/:class:`bytes`, or a file-like object.
281s 
281s :param headers:
281s Dictionary of custom headers to send, such as User-Agent,
281s If-None-Match, etc. If None, pool headers are used. If provided,
281s these headers completely replace any pool-specific headers.
281s 
281s :param retries:
281s Configure the number of retries to allow before raising a
281s :class:`~urllib3.exceptions.MaxRetryError` exception.
281s 
281s Pass ``None`` to retry until you receive a response. Pass a
281s :class:`~urllib3.util.retry.Retry` object for fine-grained control
281s over different types of retries.
281s Pass an integer number to retry connection errors that many times,
281s but no other types of errors. Pass zero to never retry.
281s 
281s If ``False``, then retries are disabled and any exception is raised
281s immediately. Also, instead of raising a MaxRetryError on redirects,
281s the redirect response will be returned.
281s 
281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
281s 
281s :param redirect:
281s If True, automatically handle redirects (status codes 301, 302,
281s 303, 307, 308). Each redirect counts as a retry. Disabling retries
281s will disable redirect, too.
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s ________ ERROR at setup of GenericFileCheckpointsAPITest.test_copy_path ________
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s ______ ERROR at setup of GenericFileCheckpointsAPITest.test_copy_put_400 _______ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
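Every failure in this log bottoms out in the same ``ConnectionRefusedError: [Errno 111]``: the notebook server thread died during setup, so nothing was listening on the test port when ``wait_until_alive`` polled it. The refusal itself is easy to reproduce with the stdlib alone; a sketch, with the port found dynamically rather than hard-coding the 12341 seen above:

```python
import errno
import socket

# Reserve a free port, then close the listener so nothing is accepting
# connections on it when we dial it back.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
port = listener.getsockname()[1]
listener.close()

client = socket.socket()
client.settimeout(2)
try:
    client.connect(("127.0.0.1", port))
except OSError as e:
    # On Linux this is ECONNREFUSED (errno 111) -- the error that urllib3
    # wraps in NewConnectionError in the tracebacks above.
    print(e.errno == errno.ECONNREFUSED)
finally:
    client.close()
```

The fix for the test failure therefore lies in whatever kept `NotebookApp.start()` from binding the port, not in the client-side retry machinery.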
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s ___ ERROR at setup of GenericFileCheckpointsAPITest.test_copy_put_400_hidden ___ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s
281s         try:
281s             resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s
281s         except (ProtocolError, OSError) as err:
281s             raise ConnectionError(err, request=request)
281s
281s         except MaxRetryError as e:
281s             if isinstance(e.reason, ConnectTimeoutError):
281s                 # TODO: Remove this in 3.0.0: see #2811
281s                 if not isinstance(e.reason, NewConnectionError):
281s                     raise ConnectTimeout(e, request=request)
281s
281s             if isinstance(e.reason, ResponseError):
281s                 raise RetryError(e, request=request)
281s
281s             if isinstance(e.reason, _ProxyError):
281s                 raise ProxyError(e, request=request)
281s
281s             if isinstance(e.reason, _SSLError):
281s                 # This branch is for urllib3 v1.22 and later.
281s                 raise SSLError(e, request=request)
281s
281s >           raise ConnectionError(e, request=request)
281s E   requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s
281s The above exception was the direct cause of the following exception:
281s
281s cls = 
281s
281s     @classmethod
281s     def setup_class(cls):
281s         cls.tmp_dir = TemporaryDirectory()
281s         def tmp(*parts):
281s             path = os.path.join(cls.tmp_dir.name, *parts)
281s             try:
281s                 os.makedirs(path)
281s             except OSError as e:
281s                 if e.errno != errno.EEXIST:
281s                     raise
281s             return path
281s
281s         cls.home_dir = tmp('home')
281s         data_dir = cls.data_dir = tmp('data')
281s         config_dir = cls.config_dir = tmp('config')
281s         runtime_dir = cls.runtime_dir = tmp('runtime')
281s         cls.notebook_dir = tmp('notebooks')
281s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
281s         cls.env_patch.start()
281s         # Patch systemwide & user-wide data & config directories, to isolate
281s         # the tests from oddities of the local setup. But leave Python env
281s         # locations alone, so data files for e.g. nbconvert are accessible.
281s         # If this isolation isn't sufficient, you may need to run the tests in
281s         # a virtualenv or conda env.
281s         cls.path_patch = patch.multiple(
281s             jupyter_core.paths,
281s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
281s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
281s         )
281s         cls.path_patch.start()
281s
281s         config = cls.config or Config()
281s         config.NotebookNotary.db_file = ':memory:'
281s
281s         cls.token = hexlify(os.urandom(4)).decode('ascii')
281s
281s         started = Event()
281s         def start_thread():
281s             try:
281s                 bind_args = cls.get_bind_args()
281s                 app = cls.notebook = NotebookApp(
281s                     port_retries=0,
281s                     open_browser=False,
281s                     config_dir=cls.config_dir,
281s                     data_dir=cls.data_dir,
281s                     runtime_dir=cls.runtime_dir,
281s                     notebook_dir=cls.notebook_dir,
281s                     base_url=cls.url_prefix,
281s                     config=config,
281s                     allow_root=True,
281s                     token=cls.token,
281s                     **bind_args
281s                 )
281s                 if "asyncio" in sys.modules:
281s                     app._init_asyncio_patch()
281s                     import asyncio
281s
281s                     asyncio.set_event_loop(asyncio.new_event_loop())
281s                     # Patch the current loop in order to match production
281s                     # behavior
281s                     import nest_asyncio
281s
281s                     nest_asyncio.apply()
281s                 # don't register signal handler during tests
281s                 app.init_signal = lambda : None
281s                 # clear log handlers and propagate to root for nose to capture it
281s                 # needs to be redone after initialize, which reconfigures logging
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 app.initialize(argv=cls.get_argv())
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 loop = IOLoop.current()
281s                 loop.add_callback(started.set)
281s                 app.start()
281s             finally:
281s                 # set the event, so failure to start doesn't cause a hang
281s                 started.set()
281s                 app.session_manager.close()
281s         cls.notebook_thread = Thread(target=start_thread)
281s         cls.notebook_thread.daemon = True
281s         cls.notebook_thread.start()
281s         started.wait()
281s >       cls.wait_until_alive()
281s
281s notebook/tests/launchnotebook.py:198: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s cls = 
281s
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E   RuntimeError: The notebook server failed to start
281s
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s _____ ERROR at setup of GenericFileCheckpointsAPITest.test_create_untitled _____
281s
281s self = 
281s
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s
281s         :return: New socket connection.
281s         """
281s         try:
281s >           sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s     raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s
281s def create_connection(
281s     address: tuple[str, int],
281s     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s     source_address: tuple[str, int] | None = None,
281s     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s ) -> socket.socket:
281s     """Connect to *address* and return the socket object.
281s
281s     Convenience function. Connect to *address* (a 2-tuple ``(host,
281s     port)``) and return the socket object. Passing the optional
281s     *timeout* parameter will set the timeout on the socket instance
281s     before attempting to connect. If no *timeout* is supplied, the
281s     global default timeout setting returned by :func:`socket.getdefaulttimeout`
281s     is used. If *source_address* is set it must be a tuple of (host, port)
281s     for the socket to bind as a source address before making the connection.
281s     An host of '' or port 0 tells the OS to use the default.
281s     """
281s
281s     host, port = address
281s     if host.startswith("["):
281s         host = host.strip("[]")
281s     err = None
281s
281s     # Using the value from allowed_gai_family() in the context of getaddrinfo lets
281s     # us select whether to work with IPv4 DNS records, IPv6 records, or both.
281s     # The original create_connection function always returns all records.
281s     family = allowed_gai_family()
281s
281s     try:
281s         host.encode("idna")
281s     except UnicodeError:
281s         raise LocationParseError(f"'{host}', label empty or too long") from None
281s
281s     for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
281s         af, socktype, proto, canonname, sa = res
281s         sock = None
281s         try:
281s             sock = socket.socket(af, socktype, proto)
281s
281s             # If provided, set socket level options before connecting.
281s             _set_socket_options(sock, socket_options)
281s
281s             if timeout is not _DEFAULT_TIMEOUT:
281s                 sock.settimeout(timeout)
281s             if source_address:
281s                 sock.bind(source_address)
281s >           sock.connect(sa)
281s E   ConnectionRefusedError: [Errno 111] Connection refused
281s
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
281s
281s The above exception was the direct cause of the following exception:
281s
281s self = 
281s method = 'GET', url = '/a%40b/api/contents', body = None
281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s redirect = False, assert_same_host = False
281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
281s release_conn = False, chunked = False, body_pos = None, preload_content = False
281s decode_content = False, response_kw = {}
281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
281s destination_scheme = None, conn = None, release_this_conn = True
281s http_tunnel_required = False, err = None, clean_exit = False
281s
281s     def urlopen(  # type: ignore[override]
281s         self,
281s         method: str,
281s         url: str,
281s         body: _TYPE_BODY | None = None,
281s         headers: typing.Mapping[str, str] | None = None,
281s         retries: Retry | bool | int | None = None,
281s         redirect: bool = True,
281s         assert_same_host: bool = True,
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         pool_timeout: int | None = None,
281s         release_conn: bool | None = None,
281s         chunked: bool = False,
281s         body_pos: _TYPE_BODY_POSITION | None = None,
281s         preload_content: bool = True,
281s         decode_content: bool = True,
281s         **response_kw: typing.Any,
281s     ) -> BaseHTTPResponse:
281s         """
281s         Get a connection from the pool and perform an HTTP request. This is the
281s         lowest level call for making a request, so you'll need to specify all
281s         the raw details.
281s
281s         .. note::
281s
281s             More commonly, it's appropriate to use a convenience method
281s             such as :meth:`request`.
281s
281s         .. note::
281s
281s             `release_conn` will only behave as expected if
281s             `preload_content=False` because we want to make
281s             `preload_content=False` the default behaviour someday soon without
281s             breaking backwards compatibility.
281s
281s         :param method:
281s             HTTP request method (such as GET, POST, PUT, etc.)
281s
281s         :param url:
281s             The URL to perform the request on.
281s
281s         :param body:
281s             Data to send in the request body, either :class:`str`, :class:`bytes`,
281s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
281s
281s         :param headers:
281s             Dictionary of custom headers to send, such as User-Agent,
281s             If-None-Match, etc. If None, pool headers are used. If provided,
281s             these headers completely replace any pool-specific headers.
281s
281s         :param retries:
281s             Configure the number of retries to allow before raising a
281s             :class:`~urllib3.exceptions.MaxRetryError` exception.
281s
281s             Pass ``None`` to retry until you receive a response. Pass a
281s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
281s             over different types of retries.
281s             Pass an integer number to retry connection errors that many times,
281s             but no other types of errors. Pass zero to never retry.
281s
281s             If ``False``, then retries are disabled and any exception is raised
281s             immediately. Also, instead of raising a MaxRetryError on redirects,
281s             the redirect response will be returned.
281s
281s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
281s
281s         :param redirect:
281s             If True, automatically handle redirects (status codes 301, 302,
281s             303, 307, 308). Each redirect counts as a retry. Disabling retries
281s             will disable redirect, too.
281s
281s         :param assert_same_host:
281s             If ``True``, will make sure that the host of the pool requests is
281s             consistent else will raise HostChangedError. When ``False``, you can
281s             use the pool on an HTTP proxy and request foreign hosts.
281s
281s         :param timeout:
281s             If specified, overrides the default timeout for this one
281s             request. It may be a float (in seconds) or an instance of
281s             :class:`urllib3.util.Timeout`.
281s
281s         :param pool_timeout:
281s             If set and the pool is set to block=True, then this method will
281s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
281s             connection is available within the time period.
281s
281s         :param bool preload_content:
281s             If True, the response's body will be preloaded into memory.
281s
281s         :param bool decode_content:
281s             If True, will attempt to decode the body based on the
281s             'content-encoding' header.
281s
281s         :param release_conn:
281s             If False, then the urlopen call will not release the connection
281s             back into the pool once a response is received (but will release if
281s             you read the entire contents of the response such as when
281s             `preload_content=True`). This is useful if you're not preloading
281s             the response's content immediately. You will need to call
281s             ``r.release_conn()`` on the response ``r`` to return the connection
281s             back into the pool. If None, it takes the value of ``preload_content``
281s             which defaults to ``True``.
281s
281s         :param bool chunked:
281s             If True, urllib3 will send the body using chunked transfer
281s             encoding. Otherwise, urllib3 will send the body using the standard
281s             content-length form. Defaults to False.
281s
281s         :param int body_pos:
281s             Position to seek to in file-like body in the event of a retry or
281s             redirect. Typically this won't need to be set because urllib3 will
281s             auto-populate the value when needed.
281s         """
281s         parsed_url = parse_url(url)
281s         destination_scheme = parsed_url.scheme
281s
281s         if headers is None:
281s             headers = self.headers
281s
281s         if not isinstance(retries, Retry):
281s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
281s
281s         if release_conn is None:
281s             release_conn = preload_content
281s
281s         # Check host
281s         if assert_same_host and not self.is_same_host(url):
281s             raise HostChangedError(self, url, retries)
281s
281s         # Ensure that the URL we're connecting to is properly encoded
281s         if url.startswith("/"):
281s             url = to_str(_encode_target(url))
281s         else:
281s             url = to_str(parsed_url.url)
281s
281s         conn = None
281s
281s         # Track whether `conn` needs to be released before
281s         # returning/raising/recursing. Update this variable if necessary, and
281s         # leave `release_conn` constant throughout the function. That way, if
281s         # the function recurses, the original value of `release_conn` will be
281s         # passed down into the recursive call, and its value will be respected.
281s         #
281s         # See issue #651 [1] for details.
281s         #
281s         # [1]
281s         release_this_conn = release_conn
281s
281s         http_tunnel_required = connection_requires_http_tunnel(
281s             self.proxy, self.proxy_config, destination_scheme
281s         )
281s
281s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
281s         # have to copy the headers dict so we can safely change it without those
281s         # changes being reflected in anyone else's copy.
281s         if not http_tunnel_required:
281s             headers = headers.copy()  # type: ignore[attr-defined]
281s             headers.update(self.proxy_headers)  # type: ignore[union-attr]
281s
281s         # Must keep the exception bound to a separate variable or else Python 3
281s         # complains about UnboundLocalError.
281s         err = None
281s
281s         # Keep track of whether we cleanly exited the except block. This
281s         # ensures we do proper cleanup in finally.
281s         clean_exit = False
281s
281s         # Rewind body position, if needed. Record current position
281s         # for future rewinds in the event of a redirect/retry.
281s         body_pos = set_file_position(body, body_pos)
281s
281s         try:
281s             # Request a connection from the queue.
281s             timeout_obj = self._get_timeout(timeout)
281s             conn = self._get_conn(timeout=pool_timeout)
281s
281s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
281s
281s             # Is this a closed/new connection that requires CONNECT tunnelling?
281s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
281s                 try:
281s                     self._prepare_proxy(conn)
281s                 except (BaseSSLError, OSError, SocketTimeout) as e:
281s                     self._raise_timeout(
281s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
281s                     )
281s                     raise
281s
281s             # If we're going to release the connection in ``finally:``, then
281s             # the response doesn't need to know about the connection. Otherwise
281s             # it will also try to release it and we'll have a double-release
281s             # mess.
281s             response_conn = conn if not release_conn else None
281s
281s             # Make the request on the HTTPConnection object
281s >           response = self._make_request(
281s                 conn,
281s                 method,
281s                 url,
281s                 timeout=timeout_obj,
281s                 body=body,
281s                 headers=headers,
281s                 chunked=chunked,
281s                 retries=retries,
281s                 response_conn=response_conn,
281s                 preload_content=preload_content,
281s                 decode_content=decode_content,
281s                 **response_kw,
281s             )
281s
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
281s     conn.request(
281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
281s     self.endheaders()
281s /usr/lib/python3.12/http/client.py:1331: in endheaders
281s     self._send_output(message_body, encode_chunked=encode_chunked)
281s /usr/lib/python3.12/http/client.py:1091: in _send_output
281s     self.send(msg)
281s /usr/lib/python3.12/http/client.py:1035: in send
281s     self.connect()
281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
281s     self.sock = self._new_conn()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s self = 
281s
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s
281s         :return: New socket connection.
281s         """
281s         try:
281s             sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s         except socket.gaierror as e:
281s             raise NameResolutionError(self.host, self, e) from e
281s         except SocketTimeout as e:
281s             raise ConnectTimeoutError(
281s                 self,
281s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
281s             ) from e
281s
281s         except OSError as e:
281s >           raise NewConnectionError(
281s                 self, f"Failed to establish a new connection: {e}"
281s             ) from e
281s E   urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
281s
281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
281s
281s The above exception was the direct cause of the following exception:
281s
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s
281s         try:
281s >           resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s
281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
281s     retries = retries.increment(
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s method = 'GET', url = '/a%40b/api/contents', response = None
281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
281s _pool = 
281s _stacktrace = 
281s
281s     def increment(
281s         self,
281s         method: str | None = None,
281s         url: str | None = None,
281s         response: BaseHTTPResponse | None = None,
281s         error: Exception | None = None,
281s         _pool: ConnectionPool | None = None,
281s         _stacktrace: TracebackType | None = None,
281s     ) -> Retry:
281s         """Return a new Retry object with incremented retry counters.
281s
281s         :param response: A response object, or None, if the server did not
281s             return a response.
281s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
281s         :param Exception error: An error encountered during the request, or
281s             None if the response was received successfully.
281s
281s         :return: A new ``Retry`` object.
281s         """
281s         if self.total is False and error:
281s             # Disabled, indicate to re-raise the error.
281s                 raise reraise(type(error), error, _stacktrace)
281s
281s         total = self.total
281s         if total is not None:
281s             total -= 1
281s
281s         connect = self.connect
281s         read = self.read
281s         redirect = self.redirect
281s         status_count = self.status
281s         other = self.other
281s         cause = "unknown"
281s         status = None
281s         redirect_location = None
281s
281s         if error and self._is_connection_error(error):
281s             # Connect retry?
281s             if connect is False:
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif connect is not None:
281s                 connect -= 1
281s
281s         elif error and self._is_read_error(error):
281s             # Read retry?
281s             if read is False or method is None or not self._is_method_retryable(method):
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif read is not None:
281s                 read -= 1
281s
281s         elif error:
281s             # Other retry?
281s             if other is not None:
281s                 other -= 1
281s
281s         elif response and response.get_redirect_location():
281s             # Redirect retry?
281s             if redirect is not None:
281s                 redirect -= 1
281s             cause = "too many redirects"
281s             response_redirect_location = response.get_redirect_location()
281s             if response_redirect_location:
281s                 redirect_location = response_redirect_location
281s             status = response.status
281s
281s         else:
281s             # Incrementing because of a server error like a 500 in
281s             # status_forcelist and the given method is in the allowed_methods
281s             cause = ResponseError.GENERIC_ERROR
281s             if response and response.status:
281s                 if status_count is not None:
281s                     status_count -= 1
281s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
281s                 status = response.status
281s
281s         history = self.history + (
281s             RequestHistory(method, url, error, status, redirect_location),
281s         )
281s
281s         new_retry = self.new(
281s             total=total,
281s             connect=connect,
281s             read=read,
281s             redirect=redirect,
281s             status=status_count,
281s             other=other,
281s             history=history,
281s         )
281s
281s         if new_retry.is_exhausted():
281s             reason = error or ResponseError(cause)
281s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
281s E   urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s
281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
281s
281s During handling of the above exception, another exception occurred:
281s
281s cls = 
281s
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s >               cls.fetch_url(url)
281s
281s notebook/tests/launchnotebook.py:53: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s notebook/tests/launchnotebook.py:82: in fetch_url
281s     return requests.get(url)
281s /usr/lib/python3/dist-packages/requests/api.py:73: in get
281s     return request("get", url, params=params, **kwargs)
281s /usr/lib/python3/dist-packages/requests/api.py:59: in request
281s     return session.request(method=method, url=url, **kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
281s     resp = self.send(prep, **send_kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
281s     r = adapter.send(request, **kwargs)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s
281s         try:
281s             resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s
281s         except (ProtocolError, OSError) as err:
281s             raise ConnectionError(err, request=request)
281s
281s         except MaxRetryError as e:
281s             if isinstance(e.reason, ConnectTimeoutError):
281s                 # TODO: Remove this in 3.0.0: see #2811
281s                 if not isinstance(e.reason, NewConnectionError):
281s                     raise ConnectTimeout(e, request=request)
281s
281s             if isinstance(e.reason, ResponseError):
281s                 raise RetryError(e, request=request)
281s
281s             if isinstance(e.reason, _ProxyError):
281s                 raise ProxyError(e, request=request)
281s
281s             if isinstance(e.reason, _SSLError):
281s                 # This branch is for urllib3 v1.22 and later.
281s                 raise SSLError(e, request=request)
281s
281s >           raise ConnectionError(e, request=request)
281s E   requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s
281s The above exception was the direct cause of the following exception:
281s
281s cls = 
281s
281s     @classmethod
281s     def setup_class(cls):
281s         cls.tmp_dir = TemporaryDirectory()
281s         def tmp(*parts):
281s             path = os.path.join(cls.tmp_dir.name, *parts)
281s             try:
281s                 os.makedirs(path)
281s             except OSError as e:
281s                 if e.errno != errno.EEXIST:
281s                     raise
281s             return path
281s
281s         cls.home_dir = tmp('home')
281s         data_dir = cls.data_dir = tmp('data')
281s         config_dir = cls.config_dir = tmp('config')
281s         runtime_dir = cls.runtime_dir = tmp('runtime')
281s         cls.notebook_dir = tmp('notebooks')
281s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
281s         cls.env_patch.start()
281s         # Patch systemwide & user-wide data & config directories, to isolate
281s         # the tests from oddities of the local setup. But leave Python env
281s         # locations alone, so data files for e.g. nbconvert are accessible.
281s         # If this isolation isn't sufficient, you may need to run the tests in
281s         # a virtualenv or conda env.
281s         cls.path_patch = patch.multiple(
281s             jupyter_core.paths,
281s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
281s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
281s         )
281s         cls.path_patch.start()
281s
281s         config = cls.config or Config()
281s         config.NotebookNotary.db_file = ':memory:'
281s
281s         cls.token = hexlify(os.urandom(4)).decode('ascii')
281s
281s         started = Event()
281s         def start_thread():
281s             try:
281s                 bind_args = cls.get_bind_args()
281s                 app = cls.notebook = NotebookApp(
281s                     port_retries=0,
281s                     open_browser=False,
281s                     config_dir=cls.config_dir,
281s                     data_dir=cls.data_dir,
281s                     runtime_dir=cls.runtime_dir,
281s                     notebook_dir=cls.notebook_dir,
281s                     base_url=cls.url_prefix,
281s                     config=config,
281s                     allow_root=True,
281s                     token=cls.token,
281s                     **bind_args
281s                 )
281s                 if "asyncio" in sys.modules:
281s                     app._init_asyncio_patch()
281s                     import asyncio
281s
281s                     asyncio.set_event_loop(asyncio.new_event_loop())
281s                     # Patch the current loop in order to match production
281s                     # behavior
281s                     import nest_asyncio
281s
281s                     nest_asyncio.apply()
281s                 # don't register signal handler during tests
281s                 app.init_signal = lambda : None
281s                 # clear log handlers and propagate to root for nose to capture it
281s                 # needs to be redone after initialize, which reconfigures logging
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 app.initialize(argv=cls.get_argv())
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 loop = IOLoop.current()
281s                 loop.add_callback(started.set)
281s                 app.start()
281s             finally:
281s                 # set the event, so failure to start doesn't cause a hang
281s                 started.set()
281s                 app.session_manager.close()
281s         cls.notebook_thread = Thread(target=start_thread)
281s         cls.notebook_thread.daemon = True
281s         cls.notebook_thread.start()
281s         started.wait()
281s >       cls.wait_until_alive()
281s
281s notebook/tests/launchnotebook.py:198: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s cls = 
281s
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E   RuntimeError: The notebook server failed to start
281s
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s ___ ERROR at setup of GenericFileCheckpointsAPITest.test_create_untitled_txt ___
281s
281s self = 
281s
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s
281s         :return: New socket connection.
281s         """
281s         try:
281s >           sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s     raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s
281s def create_connection(
281s     address: tuple[str, int],
281s     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s     source_address: tuple[str, int] | None = None,
281s     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s ) -> socket.socket:
281s     """Connect to *address* and return the socket object.
281s
281s     Convenience function. Connect to *address* (a 2-tuple ``(host,
281s     port)``) and return the socket object. Passing the optional
281s     *timeout* parameter will set the timeout on the socket instance
281s     before attempting to connect. If no *timeout* is supplied, the
281s     global default timeout setting returned by :func:`socket.getdefaulttimeout`
281s     is used. If *source_address* is set it must be a tuple of (host, port)
281s     for the socket to bind as a source address before making the connection.
281s     An host of '' or port 0 tells the OS to use the default.
281s     """
281s
281s     host, port = address
281s     if host.startswith("["):
281s         host = host.strip("[]")
281s     err = None
281s
281s     # Using the value from allowed_gai_family() in the context of getaddrinfo lets
281s     # us select whether to work with IPv4 DNS records, IPv6 records, or both.
281s     # The original create_connection function always returns all records.
281s     family = allowed_gai_family()
281s
281s     try:
281s         host.encode("idna")
281s     except UnicodeError:
281s         raise LocationParseError(f"'{host}', label empty or too long") from None
281s
281s     for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
281s         af, socktype, proto, canonname, sa = res
281s         sock = None
281s         try:
281s             sock = socket.socket(af, socktype, proto)
281s
281s             # If provided, set socket level options before connecting.
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s ____ ERROR at setup of GenericFileCheckpointsAPITest.test_delete_hidden_dir ____ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s ___ ERROR at setup of GenericFileCheckpointsAPITest.test_delete_hidden_file ____
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
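(Editor's note: the comment above describes isolating the test run from system-wide and user-wide Jupyter directories via `patch.dict` and `patch.multiple`. A stdlib-only sketch of the same patching pattern; the environment variable name here is made up for illustration:)

```python
# Isolate a test from ambient configuration by patching the process
# environment, then restore it afterwards -- the same idea as the
# cls.env_patch / cls.path_patch calls in the setup_class above.
import os
from unittest.mock import patch

env_patch = patch.dict(os.environ, {"EXAMPLE_CONFIG_DIR": "/tmp/isolated-config"})
env_patch.start()
assert os.environ["EXAMPLE_CONFIG_DIR"] == "/tmp/isolated-config"
env_patch.stop()                      # original environment restored
assert "EXAMPLE_CONFIG_DIR" not in os.environ
```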
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s ____ ERROR at setup of GenericFileCheckpointsAPITest.test_file_checkpoints _____ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
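(Editor's note: the `body_pos` parameter documented above records where a file-like body should be rewound to if the request is retried or redirected; the function body captures it with `set_file_position`. A stdlib sketch of that bookkeeping, with a hypothetical helper name standing in for the real one:)

```python
# Record the body's seek position before sending, so a retry can
# rewind and retransmit exactly the bytes sent the first time.
import io

def record_position(body):
    """Return the current seek position of a file-like body (hypothetical helper)."""
    return body.tell()

body = io.BytesIO(b"request payload")
body_pos = record_position(body)   # 0: nothing consumed yet
body.read()                        # simulate the body being consumed by a failed send
body.seek(body_pos)                # rewind before the retry
assert body.read() == b"request payload"
```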
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s _____ ERROR at setup of GenericFileCheckpointsAPITest.test_get_404_hidden ______ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s ______ ERROR at setup of GenericFileCheckpointsAPITest.test_get_bad_type _______ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s                 _set_socket_options(sock, socket_options)
281s
281s                 if timeout is not _DEFAULT_TIMEOUT:
281s                     sock.settimeout(timeout)
281s                 if source_address:
281s                     sock.bind(source_address)
281s >               sock.connect(sa)
281s E               ConnectionRefusedError: [Errno 111] Connection refused
281s
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
281s
281s The above exception was the direct cause of the following exception:
281s
281s self = 
281s method = 'GET', url = '/a%40b/api/contents', body = None
281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s redirect = False, assert_same_host = False
281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
281s release_conn = False, chunked = False, body_pos = None, preload_content = False
281s decode_content = False, response_kw = {}
281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
281s destination_scheme = None, conn = None, release_this_conn = True
281s http_tunnel_required = False, err = None, clean_exit = False
281s
281s     def urlopen(  # type: ignore[override]
281s         self,
281s         method: str,
281s         url: str,
281s         body: _TYPE_BODY | None = None,
281s         headers: typing.Mapping[str, str] | None = None,
281s         retries: Retry | bool | int | None = None,
281s         redirect: bool = True,
281s         assert_same_host: bool = True,
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         pool_timeout: int | None = None,
281s         release_conn: bool | None = None,
281s         chunked: bool = False,
281s         body_pos: _TYPE_BODY_POSITION | None = None,
281s         preload_content: bool = True,
281s         decode_content: bool = True,
281s         **response_kw: typing.Any,
281s     ) -> BaseHTTPResponse:
281s         """
281s         Get a connection from the pool and perform an HTTP request. This is the
281s         lowest level call for making a request, so you'll need to specify all
281s         the raw details.
281s
281s         .. note::
281s
281s            More commonly, it's appropriate to use a convenience method
281s            such as :meth:`request`.
281s
281s         .. note::
281s
281s            `release_conn` will only behave as expected if
281s            `preload_content=False` because we want to make
281s            `preload_content=False` the default behaviour someday soon without
281s            breaking backwards compatibility.
281s
281s         :param method:
281s             HTTP request method (such as GET, POST, PUT, etc.)
281s
281s         :param url:
281s             The URL to perform the request on.
281s
281s         :param body:
281s             Data to send in the request body, either :class:`str`, :class:`bytes`,
281s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
281s
281s         :param headers:
281s             Dictionary of custom headers to send, such as User-Agent,
281s             If-None-Match, etc. If None, pool headers are used. If provided,
281s             these headers completely replace any pool-specific headers.
281s
281s         :param retries:
281s             Configure the number of retries to allow before raising a
281s             :class:`~urllib3.exceptions.MaxRetryError` exception.
281s
281s             Pass ``None`` to retry until you receive a response. Pass a
281s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
281s             over different types of retries.
281s             Pass an integer number to retry connection errors that many times,
281s             but no other types of errors. Pass zero to never retry.
281s
281s             If ``False``, then retries are disabled and any exception is raised
281s             immediately. Also, instead of raising a MaxRetryError on redirects,
281s             the redirect response will be returned.
281s
281s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
281s
281s         :param redirect:
281s             If True, automatically handle redirects (status codes 301, 302,
281s             303, 307, 308). Each redirect counts as a retry. Disabling retries
281s             will disable redirect, too.
281s
281s         :param assert_same_host:
281s             If ``True``, will make sure that the host of the pool requests is
281s             consistent else will raise HostChangedError. When ``False``, you can
281s             use the pool on an HTTP proxy and request foreign hosts.
281s
281s         :param timeout:
281s             If specified, overrides the default timeout for this one
281s             request. It may be a float (in seconds) or an instance of
281s             :class:`urllib3.util.Timeout`.
281s
281s         :param pool_timeout:
281s             If set and the pool is set to block=True, then this method will
281s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
281s             connection is available within the time period.
281s
281s         :param bool preload_content:
281s             If True, the response's body will be preloaded into memory.
281s
281s         :param bool decode_content:
281s             If True, will attempt to decode the body based on the
281s             'content-encoding' header.
281s
281s         :param release_conn:
281s             If False, then the urlopen call will not release the connection
281s             back into the pool once a response is received (but will release if
281s             you read the entire contents of the response such as when
281s             `preload_content=True`). This is useful if you're not preloading
281s             the response's content immediately. You will need to call
281s             ``r.release_conn()`` on the response ``r`` to return the connection
281s             back into the pool. If None, it takes the value of ``preload_content``
281s             which defaults to ``True``.
281s
281s         :param bool chunked:
281s             If True, urllib3 will send the body using chunked transfer
281s             encoding. Otherwise, urllib3 will send the body using the standard
281s             content-length form. Defaults to False.
281s
281s         :param int body_pos:
281s             Position to seek to in file-like body in the event of a retry or
281s             redirect. Typically this won't need to be set because urllib3 will
281s             auto-populate the value when needed.
281s         """
281s         parsed_url = parse_url(url)
281s         destination_scheme = parsed_url.scheme
281s
281s         if headers is None:
281s             headers = self.headers
281s
281s         if not isinstance(retries, Retry):
281s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
281s
281s         if release_conn is None:
281s             release_conn = preload_content
281s
281s         # Check host
281s         if assert_same_host and not self.is_same_host(url):
281s             raise HostChangedError(self, url, retries)
281s
281s         # Ensure that the URL we're connecting to is properly encoded
281s         if url.startswith("/"):
281s             url = to_str(_encode_target(url))
281s         else:
281s             url = to_str(parsed_url.url)
281s
281s         conn = None
281s
281s         # Track whether `conn` needs to be released before
281s         # returning/raising/recursing. Update this variable if necessary, and
281s         # leave `release_conn` constant throughout the function. That way, if
281s         # the function recurses, the original value of `release_conn` will be
281s         # passed down into the recursive call, and its value will be respected.
281s         #
281s         # See issue #651 [1] for details.
281s         #
281s         # [1]
281s         release_this_conn = release_conn
281s
281s         http_tunnel_required = connection_requires_http_tunnel(
281s             self.proxy, self.proxy_config, destination_scheme
281s         )
281s
281s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
281s         # have to copy the headers dict so we can safely change it without those
281s         # changes being reflected in anyone else's copy.
281s         if not http_tunnel_required:
281s             headers = headers.copy()  # type: ignore[attr-defined]
281s             headers.update(self.proxy_headers)  # type: ignore[union-attr]
281s
281s         # Must keep the exception bound to a separate variable or else Python 3
281s         # complains about UnboundLocalError.
281s         err = None
281s
281s         # Keep track of whether we cleanly exited the except block. This
281s         # ensures we do proper cleanup in finally.
281s         clean_exit = False
281s
281s         # Rewind body position, if needed. Record current position
281s         # for future rewinds in the event of a redirect/retry.
281s         body_pos = set_file_position(body, body_pos)
281s
281s         try:
281s             # Request a connection from the queue.
281s             timeout_obj = self._get_timeout(timeout)
281s             conn = self._get_conn(timeout=pool_timeout)
281s
281s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
281s
281s             # Is this a closed/new connection that requires CONNECT tunnelling?
281s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
281s                 try:
281s                     self._prepare_proxy(conn)
281s                 except (BaseSSLError, OSError, SocketTimeout) as e:
281s                     self._raise_timeout(
281s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
281s                     )
281s                     raise
281s
281s             # If we're going to release the connection in ``finally:``, then
281s             # the response doesn't need to know about the connection. Otherwise
281s             # it will also try to release it and we'll have a double-release
281s             # mess.
281s             response_conn = conn if not release_conn else None
281s
281s             # Make the request on the HTTPConnection object
281s >           response = self._make_request(
281s                 conn,
281s                 method,
281s                 url,
281s                 timeout=timeout_obj,
281s                 body=body,
281s                 headers=headers,
281s                 chunked=chunked,
281s                 retries=retries,
281s                 response_conn=response_conn,
281s                 preload_content=preload_content,
281s                 decode_content=decode_content,
281s                 **response_kw,
281s             )
281s
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
281s     conn.request(
281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
281s     self.endheaders()
281s /usr/lib/python3.12/http/client.py:1331: in endheaders
281s     self._send_output(message_body, encode_chunked=encode_chunked)
281s /usr/lib/python3.12/http/client.py:1091: in _send_output
281s     self.send(msg)
281s /usr/lib/python3.12/http/client.py:1035: in send
281s     self.connect()
281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
281s     self.sock = self._new_conn()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s self = 
281s
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s
281s         :return: New socket connection.
281s         """
281s         try:
281s             sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s         except socket.gaierror as e:
281s             raise NameResolutionError(self.host, self, e) from e
281s         except SocketTimeout as e:
281s             raise ConnectTimeoutError(
281s                 self,
281s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
281s             ) from e
281s
281s         except OSError as e:
281s >           raise NewConnectionError(
281s                 self, f"Failed to establish a new connection: {e}"
281s             ) from e
281s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
281s
281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
281s
281s The above exception was the direct cause of the following exception:
281s
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s
281s         try:
281s >           resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s
281s /usr/lib/python3/dist-packages/requests/adapters.py:486:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
281s     retries = retries.increment(
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s method = 'GET', url = '/a%40b/api/contents', response = None
281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
281s _pool = 
281s _stacktrace = 
281s
281s     def increment(
281s         self,
281s         method: str | None = None,
281s         url: str | None = None,
281s         response: BaseHTTPResponse | None = None,
281s         error: Exception | None = None,
281s         _pool: ConnectionPool | None = None,
281s         _stacktrace: TracebackType | None = None,
281s     ) -> Retry:
281s         """Return a new Retry object with incremented retry counters.
281s
281s         :param response: A response object, or None, if the server did not
281s             return a response.
281s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
281s         :param Exception error: An error encountered during the request, or
281s             None if the response was received successfully.
281s
281s         :return: A new ``Retry`` object.
281s         """
281s         if self.total is False and error:
281s             # Disabled, indicate to re-raise the error.
281s             raise reraise(type(error), error, _stacktrace)
281s
281s         total = self.total
281s         if total is not None:
281s             total -= 1
281s
281s         connect = self.connect
281s         read = self.read
281s         redirect = self.redirect
281s         status_count = self.status
281s         other = self.other
281s         cause = "unknown"
281s         status = None
281s         redirect_location = None
281s
281s         if error and self._is_connection_error(error):
281s             # Connect retry?
281s             if connect is False:
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif connect is not None:
281s                 connect -= 1
281s
281s         elif error and self._is_read_error(error):
281s             # Read retry?
281s             if read is False or method is None or not self._is_method_retryable(method):
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif read is not None:
281s                 read -= 1
281s
281s         elif error:
281s             # Other retry?
281s             if other is not None:
281s                 other -= 1
281s
281s         elif response and response.get_redirect_location():
281s             # Redirect retry?
281s             if redirect is not None:
281s                 redirect -= 1
281s             cause = "too many redirects"
281s             response_redirect_location = response.get_redirect_location()
281s             if response_redirect_location:
281s                 redirect_location = response_redirect_location
281s             status = response.status
281s
281s         else:
281s             # Incrementing because of a server error like a 500 in
281s             # status_forcelist and the given method is in the allowed_methods
281s             cause = ResponseError.GENERIC_ERROR
281s             if response and response.status:
281s                 if status_count is not None:
281s                     status_count -= 1
281s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
281s                 status = response.status
281s
281s         history = self.history + (
281s             RequestHistory(method, url, error, status, redirect_location),
281s         )
281s
281s         new_retry = self.new(
281s             total=total,
281s             connect=connect,
281s             read=read,
281s             redirect=redirect,
281s             status=status_count,
281s             other=other,
281s             history=history,
281s         )
281s
281s         if new_retry.is_exhausted():
281s             reason = error or ResponseError(cause)
281s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
281s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s
281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
281s
281s During handling of the above exception, another exception occurred:
281s
281s cls = 
281s
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s >               cls.fetch_url(url)
281s
281s notebook/tests/launchnotebook.py:53:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s notebook/tests/launchnotebook.py:82: in fetch_url
281s     return requests.get(url)
281s /usr/lib/python3/dist-packages/requests/api.py:73: in get
281s     return request("get", url, params=params, **kwargs)
281s /usr/lib/python3/dist-packages/requests/api.py:59: in request
281s     return session.request(method=method, url=url, **kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
281s     resp = self.send(prep, **send_kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
281s     r = adapter.send(request, **kwargs)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s
281s         try:
281s             resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s
281s         except (ProtocolError, OSError) as err:
281s             raise ConnectionError(err, request=request)
281s
281s         except MaxRetryError as e:
281s             if isinstance(e.reason, ConnectTimeoutError):
281s                 # TODO: Remove this in 3.0.0: see #2811
281s                 if not isinstance(e.reason, NewConnectionError):
281s                     raise ConnectTimeout(e, request=request)
281s
281s             if isinstance(e.reason, ResponseError):
281s                 raise RetryError(e, request=request)
281s
281s             if isinstance(e.reason, _ProxyError):
281s                 raise ProxyError(e, request=request)
281s
281s             if isinstance(e.reason, _SSLError):
281s                 # This branch is for urllib3 v1.22 and later.
281s                 raise SSLError(e, request=request)
281s
281s >           raise ConnectionError(e, request=request)
281s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s
281s The above exception was the direct cause of the following exception:
281s
281s cls = 
281s
281s     @classmethod
281s     def setup_class(cls):
281s         cls.tmp_dir = TemporaryDirectory()
281s         def tmp(*parts):
281s             path = os.path.join(cls.tmp_dir.name, *parts)
281s             try:
281s                 os.makedirs(path)
281s             except OSError as e:
281s                 if e.errno != errno.EEXIST:
281s                     raise
281s             return path
281s
281s         cls.home_dir = tmp('home')
281s         data_dir = cls.data_dir = tmp('data')
281s         config_dir = cls.config_dir = tmp('config')
281s         runtime_dir = cls.runtime_dir = tmp('runtime')
281s         cls.notebook_dir = tmp('notebooks')
281s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
281s         cls.env_patch.start()
281s         # Patch systemwide & user-wide data & config directories, to isolate
281s         # the tests from oddities of the local setup. But leave Python env
281s         # locations alone, so data files for e.g. nbconvert are accessible.
281s         # If this isolation isn't sufficient, you may need to run the tests in
281s         # a virtualenv or conda env.
281s         cls.path_patch = patch.multiple(
281s             jupyter_core.paths,
281s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
281s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
281s         )
281s         cls.path_patch.start()
281s
281s         config = cls.config or Config()
281s         config.NotebookNotary.db_file = ':memory:'
281s
281s         cls.token = hexlify(os.urandom(4)).decode('ascii')
281s
281s         started = Event()
281s         def start_thread():
281s             try:
281s                 bind_args = cls.get_bind_args()
281s                 app = cls.notebook = NotebookApp(
281s                     port_retries=0,
281s                     open_browser=False,
281s                     config_dir=cls.config_dir,
281s                     data_dir=cls.data_dir,
281s                     runtime_dir=cls.runtime_dir,
281s                     notebook_dir=cls.notebook_dir,
281s                     base_url=cls.url_prefix,
281s                     config=config,
281s                     allow_root=True,
281s                     token=cls.token,
281s                     **bind_args
281s                 )
281s                 if "asyncio" in sys.modules:
281s                     app._init_asyncio_patch()
281s                 import asyncio
281s
281s                 asyncio.set_event_loop(asyncio.new_event_loop())
281s                 # Patch the current loop in order to match production
281s                 # behavior
281s                 import nest_asyncio
281s
281s                 nest_asyncio.apply()
281s                 # don't register signal handler during tests
281s                 app.init_signal = lambda : None
281s                 # clear log handlers and propagate to root for nose to capture it
281s                 # needs to be redone after initialize, which reconfigures logging
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 app.initialize(argv=cls.get_argv())
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 loop = IOLoop.current()
281s                 loop.add_callback(started.set)
281s                 app.start()
281s             finally:
281s                 # set the event, so failure to start doesn't cause a hang
281s                 started.set()
281s                 app.session_manager.close()
281s         cls.notebook_thread = Thread(target=start_thread)
281s         cls.notebook_thread.daemon = True
281s         cls.notebook_thread.start()
281s         started.wait()
281s >       cls.wait_until_alive()
281s
281s notebook/tests/launchnotebook.py:198:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s cls = 
281s
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E                   RuntimeError: The notebook server failed to start
281s
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s _ ERROR at setup of GenericFileCheckpointsAPITest.test_get_binary_file_contents _
281s
281s self = 
281s
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s
281s         :return: New socket connection.
281s         """
281s         try:
281s >           sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s     raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s
281s     def create_connection(
281s         address: tuple[str, int],
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         source_address: tuple[str, int] | None = None,
281s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s     ) -> socket.socket:
281s         """Connect to *address* and return the socket object.
281s
281s         Convenience function. Connect to *address* (a 2-tuple ``(host,
281s         port)``) and return the socket object. Passing the optional
281s         *timeout* parameter will set the timeout on the socket instance
281s         before attempting to connect. If no *timeout* is supplied, the
281s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
281s         is used. If *source_address* is set it must be a tuple of (host, port)
281s         for the socket to bind as a source address before making the connection.
281s         An host of '' or port 0 tells the OS to use the default.
281s         """
281s
281s         host, port = address
281s         if host.startswith("["):
281s             host = host.strip("[]")
281s         err = None
281s
281s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
281s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
281s         # The original create_connection function always returns all records.
281s         family = allowed_gai_family()
281s
281s         try:
281s             host.encode("idna")
281s         except UnicodeError:
281s             raise LocationParseError(f"'{host}', label empty or too long") from None
281s
281s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
281s             af, socktype, proto, canonname, sa = res
281s             sock = None
281s             try:
281s                 sock = socket.socket(af, socktype, proto)
281s
281s                 # If provided, set socket level options before connecting.
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s _ ERROR at setup of GenericFileCheckpointsAPITest.test_get_contents_no_such_file _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s             _set_socket_options(sock, socket_options)
281s 
281s             if timeout is not _DEFAULT_TIMEOUT:
281s                 sock.settimeout(timeout)
281s             if source_address:
281s                 sock.bind(source_address)
281s >           sock.connect(sa)
281s E           ConnectionRefusedError: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s method = 'GET', url = '/a%40b/api/contents', body = None
281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s redirect = False, assert_same_host = False
281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
281s release_conn = False, chunked = False, body_pos = None, preload_content = False
281s decode_content = False, response_kw = {}
281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
281s destination_scheme = None, conn = None, release_this_conn = True
281s http_tunnel_required = False, err = None, clean_exit = False
281s 
281s def urlopen(  # type: ignore[override]
281s     self,
281s     method: str,
281s     url: str,
281s     body: _TYPE_BODY | None = None,
281s     headers: typing.Mapping[str, str] | None = None,
281s     retries: Retry | bool | int | None = None,
281s     redirect: bool = True,
281s     assert_same_host: bool = True,
281s     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s     pool_timeout: int | None = None,
281s     release_conn: bool | None = None,
281s     chunked: bool = False,
281s     body_pos: _TYPE_BODY_POSITION | None = None,
281s     preload_content: bool = True,
281s     decode_content: bool = True,
281s     **response_kw: typing.Any,
281s ) -> BaseHTTPResponse:
281s     """
281s     Get a connection from the pool and perform an HTTP request. This is the
281s     lowest level call for making a request, so you'll need to specify all
281s     the raw details.
281s 
281s     .. note::
281s 
281s        More commonly, it's appropriate to use a convenience method
281s        such as :meth:`request`.
281s 
281s     .. note::
281s 
281s        `release_conn` will only behave as expected if
281s        `preload_content=False` because we want to make
281s        `preload_content=False` the default behaviour someday soon without
281s        breaking backwards compatibility.
281s 
281s     :param method:
281s         HTTP request method (such as GET, POST, PUT, etc.)
281s 
281s     :param url:
281s         The URL to perform the request on.
281s 
281s     :param body:
281s         Data to send in the request body, either :class:`str`, :class:`bytes`,
281s         an iterable of :class:`str`/:class:`bytes`, or a file-like object.
281s 
281s     :param headers:
281s         Dictionary of custom headers to send, such as User-Agent,
281s         If-None-Match, etc. If None, pool headers are used. If provided,
281s         these headers completely replace any pool-specific headers.
281s 
281s     :param retries:
281s         Configure the number of retries to allow before raising a
281s         :class:`~urllib3.exceptions.MaxRetryError` exception.
281s 
281s         Pass ``None`` to retry until you receive a response. Pass a
281s         :class:`~urllib3.util.retry.Retry` object for fine-grained control
281s         over different types of retries.
281s         Pass an integer number to retry connection errors that many times,
281s         but no other types of errors. Pass zero to never retry.
281s 
281s         If ``False``, then retries are disabled and any exception is raised
281s         immediately. Also, instead of raising a MaxRetryError on redirects,
281s         the redirect response will be returned.
281s 
281s     :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
281s 
281s     :param redirect:
281s         If True, automatically handle redirects (status codes 301, 302,
281s         303, 307, 308). Each redirect counts as a retry. Disabling retries
281s         will disable redirect, too.
281s 
281s     :param assert_same_host:
281s         If ``True``, will make sure that the host of the pool requests is
281s         consistent else will raise HostChangedError. When ``False``, you can
281s         use the pool on an HTTP proxy and request foreign hosts.
281s 
281s     :param timeout:
281s         If specified, overrides the default timeout for this one
281s         request. It may be a float (in seconds) or an instance of
281s         :class:`urllib3.util.Timeout`.
281s 
281s     :param pool_timeout:
281s         If set and the pool is set to block=True, then this method will
281s         block for ``pool_timeout`` seconds and raise EmptyPoolError if no
281s         connection is available within the time period.
281s 
281s     :param bool preload_content:
281s         If True, the response's body will be preloaded into memory.
281s 
281s     :param bool decode_content:
281s         If True, will attempt to decode the body based on the
281s         'content-encoding' header.
281s 
281s     :param release_conn:
281s         If False, then the urlopen call will not release the connection
281s         back into the pool once a response is received (but will release if
281s         you read the entire contents of the response such as when
281s         `preload_content=True`). This is useful if you're not preloading
281s         the response's content immediately. You will need to call
281s         ``r.release_conn()`` on the response ``r`` to return the connection
281s         back into the pool. If None, it takes the value of ``preload_content``
281s         which defaults to ``True``.
281s 
281s     :param bool chunked:
281s         If True, urllib3 will send the body using chunked transfer
281s         encoding. Otherwise, urllib3 will send the body using the standard
281s         content-length form. Defaults to False.
281s 
281s     :param int body_pos:
281s         Position to seek to in file-like body in the event of a retry or
281s         redirect. Typically this won't need to be set because urllib3 will
281s         auto-populate the value when needed.
281s     """
281s     parsed_url = parse_url(url)
281s     destination_scheme = parsed_url.scheme
281s 
281s     if headers is None:
281s         headers = self.headers
281s 
281s     if not isinstance(retries, Retry):
281s         retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
281s 
281s     if release_conn is None:
281s         release_conn = preload_content
281s 
281s     # Check host
281s     if assert_same_host and not self.is_same_host(url):
281s         raise HostChangedError(self, url, retries)
281s 
281s     # Ensure that the URL we're connecting to is properly encoded
281s     if url.startswith("/"):
281s         url = to_str(_encode_target(url))
281s     else:
281s         url = to_str(parsed_url.url)
281s 
281s     conn = None
281s 
281s     # Track whether `conn` needs to be released before
281s     # returning/raising/recursing. Update this variable if necessary, and
281s     # leave `release_conn` constant throughout the function. That way, if
281s     # the function recurses, the original value of `release_conn` will be
281s     # passed down into the recursive call, and its value will be respected.
281s     #
281s     # See issue #651 [1] for details.
281s     #
281s     # [1] 
281s     release_this_conn = release_conn
281s 
281s     http_tunnel_required = connection_requires_http_tunnel(
281s         self.proxy, self.proxy_config, destination_scheme
281s     )
281s 
281s     # Merge the proxy headers. Only done when not using HTTP CONNECT. We
281s     # have to copy the headers dict so we can safely change it without those
281s     # changes being reflected in anyone else's copy.
281s     if not http_tunnel_required:
281s         headers = headers.copy() # type: ignore[attr-defined]
281s         headers.update(self.proxy_headers) # type: ignore[union-attr]
281s 
281s     # Must keep the exception bound to a separate variable or else Python 3
281s     # complains about UnboundLocalError.
281s     err = None
281s 
281s     # Keep track of whether we cleanly exited the except block. This
281s     # ensures we do proper cleanup in finally.
281s     clean_exit = False
281s 
281s     # Rewind body position, if needed. Record current position
281s     # for future rewinds in the event of a redirect/retry.
281s     body_pos = set_file_position(body, body_pos)
281s 
281s     try:
281s         # Request a connection from the queue.
281s         timeout_obj = self._get_timeout(timeout)
281s         conn = self._get_conn(timeout=pool_timeout)
281s 
281s         conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment]
281s 
281s         # Is this a closed/new connection that requires CONNECT tunnelling?
281s         if self.proxy is not None and http_tunnel_required and conn.is_closed:
281s             try:
281s                 self._prepare_proxy(conn)
281s             except (BaseSSLError, OSError, SocketTimeout) as e:
281s                 self._raise_timeout(
281s                     err=e, url=self.proxy.url, timeout_value=conn.timeout
281s                 )
281s                 raise
281s 
281s         # If we're going to release the connection in ``finally:``, then
281s         # the response doesn't need to know about the connection. Otherwise
281s         # it will also try to release it and we'll have a double-release
281s         # mess.
281s         response_conn = conn if not release_conn else None
281s 
281s         # Make the request on the HTTPConnection object
281s >       response = self._make_request(
281s             conn,
281s             method,
281s             url,
281s             timeout=timeout_obj,
281s             body=body,
281s             headers=headers,
281s             chunked=chunked,
281s             retries=retries,
281s             response_conn=response_conn,
281s             preload_content=preload_content,
281s             decode_content=decode_content,
281s             **response_kw,
281s         )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
281s     conn.request(
281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
281s     self.endheaders()
281s /usr/lib/python3.12/http/client.py:1331: in endheaders
281s     self._send_output(message_body, encode_chunked=encode_chunked)
281s /usr/lib/python3.12/http/client.py:1091: in _send_output
281s     self.send(msg)
281s /usr/lib/python3.12/http/client.py:1035: in send
281s     self.connect()
281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
281s     self.sock = self._new_conn()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = 
281s 
281s def _new_conn(self) -> socket.socket:
281s     """Establish a socket connection and set nodelay settings on it.
281s 
281s     :return: New socket connection.
281s     """
281s     try:
281s         sock = connection.create_connection(
281s             (self._dns_host, self.port),
281s             self.timeout,
281s             source_address=self.source_address,
281s             socket_options=self.socket_options,
281s         )
281s     except socket.gaierror as e:
281s         raise NameResolutionError(self.host, self, e) from e
281s     except SocketTimeout as e:
281s         raise ConnectTimeoutError(
281s             self,
281s             f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
281s         ) from e
281s 
281s     except OSError as e:
281s >       raise NewConnectionError(
281s             self, f"Failed to establish a new connection: {e}"
281s         ) from e
281s E       urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s def send(
281s     self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s ):
281s     """Sends PreparedRequest object. Returns Response object.
281s 
281s     :param request: The :class:`PreparedRequest ` being sent.
281s     :param stream: (optional) Whether to stream the request content.
281s     :param timeout: (optional) How long to wait for the server to send
281s         data before giving up, as a float, or a :ref:`(connect timeout,
281s         read timeout) ` tuple.
281s     :type timeout: float or tuple or urllib3 Timeout object
281s     :param verify: (optional) Either a boolean, in which case it controls whether
281s         we verify the server's TLS certificate, or a string, in which case it
281s         must be a path to a CA bundle to use
281s     :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s     :param proxies: (optional) The proxies dictionary to apply to the request.
281s     :rtype: requests.Response
281s     """
281s 
281s     try:
281s         conn = self.get_connection(request.url, proxies)
281s     except LocationValueError as e:
281s         raise InvalidURL(e, request=request)
281s 
281s     self.cert_verify(conn, request.url, verify, cert)
281s     url = self.request_url(request, proxies)
281s     self.add_headers(
281s         request,
281s         stream=stream,
281s         timeout=timeout,
281s         verify=verify,
281s         cert=cert,
281s         proxies=proxies,
281s     )
281s 
281s     chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s     if isinstance(timeout, tuple):
281s         try:
281s             connect, read = timeout
281s             timeout = TimeoutSauce(connect=connect, read=read)
281s         except ValueError:
281s             raise ValueError(
281s                 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                 f"or a single float to set both timeouts to the same value."
281s             )
281s     elif isinstance(timeout, TimeoutSauce):
281s         pass
281s     else:
281s         timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s     try:
281s >       resp = conn.urlopen(
281s             method=request.method,
281s             url=url,
281s             body=request.body,
281s             headers=request.headers,
281s             redirect=False,
281s             assert_same_host=False,
281s             preload_content=False,
281s             decode_content=False,
281s             retries=self.max_retries,
281s             timeout=timeout,
281s             chunked=chunked,
281s         )
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
281s     retries = retries.increment(
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s method = 'GET', url = '/a%40b/api/contents', response = None
281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
281s _pool = 
281s _stacktrace = 
281s 
281s def increment(
281s     self,
281s     method: str | None = None,
281s     url: str | None = None,
281s     response: BaseHTTPResponse | None = None,
281s     error: Exception | None = None,
281s     _pool: ConnectionPool | None = None,
281s     _stacktrace: TracebackType | None = None,
281s ) -> Retry:
281s     """Return a new Retry object with incremented retry counters.
281s 
281s     :param response: A response object, or None, if the server did not
281s         return a response.
281s     :type response: :class:`~urllib3.response.BaseHTTPResponse`
281s     :param Exception error: An error encountered during the request, or
281s         None if the response was received successfully.
281s 
281s     :return: A new ``Retry`` object.
281s     """
281s     if self.total is False and error:
281s         # Disabled, indicate to re-raise the error.
281s         raise reraise(type(error), error, _stacktrace)
281s 
281s     total = self.total
281s     if total is not None:
281s         total -= 1
281s 
281s     connect = self.connect
281s     read = self.read
281s     redirect = self.redirect
281s     status_count = self.status
281s     other = self.other
281s     cause = "unknown"
281s     status = None
281s     redirect_location = None
281s 
281s     if error and self._is_connection_error(error):
281s         # Connect retry?
281s         if connect is False:
281s             raise reraise(type(error), error, _stacktrace)
281s         elif connect is not None:
281s             connect -= 1
281s 
281s     elif error and self._is_read_error(error):
281s         # Read retry?
281s         if read is False or method is None or not self._is_method_retryable(method):
281s             raise reraise(type(error), error, _stacktrace)
281s         elif read is not None:
281s             read -= 1
281s 
281s     elif error:
281s         # Other retry?
281s         if other is not None:
281s             other -= 1
281s 
281s     elif response and response.get_redirect_location():
281s         # Redirect retry?
281s         if redirect is not None:
281s             redirect -= 1
281s         cause = "too many redirects"
281s         response_redirect_location = response.get_redirect_location()
281s         if response_redirect_location:
281s             redirect_location = response_redirect_location
281s         status = response.status
281s 
281s     else:
281s         # Incrementing because of a server error like a 500 in
281s         # status_forcelist and the given method is in the allowed_methods
281s         cause = ResponseError.GENERIC_ERROR
281s         if response and response.status:
281s             if status_count is not None:
281s                 status_count -= 1
281s             cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
281s             status = response.status
281s 
281s     history = self.history + (
281s         RequestHistory(method, url, error, status, redirect_location),
281s     )
281s 
281s     new_retry = self.new(
281s         total=total,
281s         connect=connect,
281s         read=read,
281s         redirect=redirect,
281s         status=status_count,
281s         other=other,
281s         history=history,
281s     )
281s 
281s     if new_retry.is_exhausted():
281s         reason = error or ResponseError(cause)
281s >       raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
281s E       urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s cls = 
281s 
281s @classmethod
281s def wait_until_alive(cls):
281s     """Wait for the server to be alive"""
281s     url = cls.base_url() + 'api/contents'
281s     for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s         try:
281s >           cls.fetch_url(url)
281s 
281s notebook/tests/launchnotebook.py:53: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s notebook/tests/launchnotebook.py:82: in fetch_url
281s     return requests.get(url)
281s /usr/lib/python3/dist-packages/requests/api.py:73: in get
281s     return request("get", url, params=params, **kwargs)
281s /usr/lib/python3/dist-packages/requests/api.py:59: in request
281s     return session.request(method=method, url=url, **kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
281s     resp = self.send(prep, **send_kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
281s     r = adapter.send(request, **kwargs)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s def send(
281s     self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s ):
281s     """Sends PreparedRequest object. Returns Response object.
281s 
281s     :param request: The :class:`PreparedRequest ` being sent.
281s     :param stream: (optional) Whether to stream the request content.
281s     :param timeout: (optional) How long to wait for the server to send
281s         data before giving up, as a float, or a :ref:`(connect timeout,
281s         read timeout) ` tuple.
281s     :type timeout: float or tuple or urllib3 Timeout object
281s     :param verify: (optional) Either a boolean, in which case it controls whether
281s         we verify the server's TLS certificate, or a string, in which case it
281s         must be a path to a CA bundle to use
281s     :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s     :param proxies: (optional) The proxies dictionary to apply to the request.
281s     :rtype: requests.Response
281s     """
281s 
281s     try:
281s         conn = self.get_connection(request.url, proxies)
281s     except LocationValueError as e:
281s         raise InvalidURL(e, request=request)
281s 
281s     self.cert_verify(conn, request.url, verify, cert)
281s     url = self.request_url(request, proxies)
281s     self.add_headers(
281s         request,
281s         stream=stream,
281s         timeout=timeout,
281s         verify=verify,
281s         cert=cert,
281s         proxies=proxies,
281s     )
281s 
281s     chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s     if isinstance(timeout, tuple):
281s         try:
281s             connect, read = timeout
281s             timeout = TimeoutSauce(connect=connect, read=read)
281s         except ValueError:
281s             raise ValueError(
281s                 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                 f"or a single float to set both timeouts to the same value."
281s             )
281s     elif isinstance(timeout, TimeoutSauce):
281s         pass
281s     else:
281s         timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s     try:
281s         resp = conn.urlopen(
281s             method=request.method,
281s             url=url,
281s             body=request.body,
281s             headers=request.headers,
281s             redirect=False,
281s             assert_same_host=False,
281s             preload_content=False,
281s             decode_content=False,
281s             retries=self.max_retries,
281s             timeout=timeout,
281s             chunked=chunked,
281s         )
281s 
281s     except (ProtocolError, OSError) as err:
281s         raise ConnectionError(err, request=request)
281s 
281s     except MaxRetryError as e:
281s         if isinstance(e.reason, ConnectTimeoutError):
281s             # TODO: Remove this in 3.0.0: see #2811
281s             if not isinstance(e.reason, NewConnectionError):
281s                 raise ConnectTimeout(e, request=request)
281s 
281s         if isinstance(e.reason, ResponseError):
281s             raise RetryError(e, request=request)
281s 
281s         if isinstance(e.reason, _ProxyError):
281s             raise ProxyError(e, request=request)
281s 
281s         if isinstance(e.reason, _SSLError):
281s             # This branch is for urllib3 v1.22 and later.
281s             raise SSLError(e, request=request)
281s 
281s >       raise ConnectionError(e, request=request)
281s E       requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s cls = 
281s 
281s @classmethod
281s def setup_class(cls):
281s     cls.tmp_dir = TemporaryDirectory()
281s     def tmp(*parts):
281s         path = os.path.join(cls.tmp_dir.name, *parts)
281s         try:
281s             os.makedirs(path)
281s         except OSError as e:
281s             if e.errno != errno.EEXIST:
281s                 raise
281s         return path
281s 
281s     cls.home_dir = tmp('home')
281s     data_dir = cls.data_dir = tmp('data')
281s     config_dir = cls.config_dir = tmp('config')
281s     runtime_dir = cls.runtime_dir = tmp('runtime')
281s     cls.notebook_dir = tmp('notebooks')
281s     cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
281s     cls.env_patch.start()
281s     # Patch systemwide & user-wide data & config directories, to isolate
281s     # the tests from oddities of the local setup. But leave Python env
281s     # locations alone, so data files for e.g. nbconvert are accessible.
281s     # If this isolation isn't sufficient, you may need to run the tests in
281s     # a virtualenv or conda env.
281s     cls.path_patch = patch.multiple(
281s         jupyter_core.paths,
281s         SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
281s         SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
281s     )
281s     cls.path_patch.start()
281s 
281s     config = cls.config or Config()
281s     config.NotebookNotary.db_file = ':memory:'
281s 
281s     cls.token = hexlify(os.urandom(4)).decode('ascii')
281s 
281s     started = Event()
281s     def start_thread():
281s         try:
281s             bind_args = cls.get_bind_args()
281s             app = cls.notebook = NotebookApp(
281s                 port_retries=0,
281s                 open_browser=False,
281s                 config_dir=cls.config_dir,
281s                 data_dir=cls.data_dir,
281s                 runtime_dir=cls.runtime_dir,
281s                 notebook_dir=cls.notebook_dir,
281s                 base_url=cls.url_prefix,
281s                 config=config,
281s                 allow_root=True,
281s                 token=cls.token,
281s                 **bind_args
281s             )
281s             if "asyncio" in sys.modules:
281s                 app._init_asyncio_patch()
281s                 import asyncio
281s 
281s                 asyncio.set_event_loop(asyncio.new_event_loop())
281s                 # Patch the current loop in order to match production
281s                 # behavior
281s                 import nest_asyncio
281s 
281s                 nest_asyncio.apply()
281s             # don't register signal handler during tests
281s             app.init_signal = lambda : None
281s             # clear log handlers and propagate to root for nose to capture it
281s             # needs to be redone after initialize, which reconfigures logging
281s             app.log.propagate = True
281s             app.log.handlers = []
281s             app.initialize(argv=cls.get_argv())
281s             app.log.propagate = True
281s             app.log.handlers = []
281s             loop = IOLoop.current()
281s             loop.add_callback(started.set)
281s             app.start()
281s         finally:
281s             # set the event, so failure to start doesn't cause a hang
281s             started.set()
281s             app.session_manager.close()
281s     cls.notebook_thread = Thread(target=start_thread)
281s     cls.notebook_thread.daemon = True
281s     cls.notebook_thread.start()
281s     started.wait()
281s >   cls.wait_until_alive()
281s 
281s notebook/tests/launchnotebook.py:198: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s cls = 
281s 
281s @classmethod
281s def wait_until_alive(cls):
281s     """Wait for the server to be alive"""
281s     url = cls.base_url() + 'api/contents'
281s     for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s         try:
281s             cls.fetch_url(url)
281s         except ModuleNotFoundError as error:
281s             # Errors that should be immediately thrown back to caller
281s             raise error
281s         except Exception as e:
281s             if not cls.notebook_thread.is_alive():
281s >               raise RuntimeError("The notebook server failed to start") from e
281s E               RuntimeError: The notebook server failed to start
281s 
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s ___ ERROR at setup of GenericFileCheckpointsAPITest.test_get_dir_no_content ____
281s 
281s self = 
281s 
281s def _new_conn(self) -> socket.socket:
281s     """Establish a socket connection and set nodelay settings on it.
281s 
281s     :return: New socket connection.
281s     """
281s     try:
281s >       sock = connection.create_connection(
281s             (self._dns_host, self.port),
281s             self.timeout,
281s             source_address=self.source_address,
281s             socket_options=self.socket_options,
281s         )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s     raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s 
281s def create_connection(
281s     address: tuple[str, int],
281s     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s     source_address: tuple[str, int] | None = None,
281s     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s ) -> socket.socket:
281s     """Connect to *address* and return the socket object.
281s 
281s     Convenience function. Connect to *address* (a 2-tuple ``(host,
281s     port)``) and return the socket object. Passing the optional
281s     *timeout* parameter will set the timeout on the socket instance
281s     before attempting to connect. If no *timeout* is supplied, the
281s     global default timeout setting returned by :func:`socket.getdefaulttimeout`
281s     is used. If *source_address* is set it must be a tuple of (host, port)
281s     for the socket to bind as a source address before making the connection.
281s     An host of '' or port 0 tells the OS to use the default.
281s     """
281s 
281s     host, port = address
281s     if host.startswith("["):
281s         host = host.strip("[]")
281s     err = None
281s 
281s     # Using the value from allowed_gai_family() in the context of getaddrinfo lets
281s     # us select whether to work with IPv4 DNS records, IPv6 records, or both.
281s     # The original create_connection function always returns all records.
281s     family = allowed_gai_family()
281s 
281s     try:
281s         host.encode("idna")
281s     except UnicodeError:
281s         raise LocationParseError(f"'{host}', label empty or too long") from None
281s 
281s     for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
281s         af, socktype, proto, canonname, sa = res
281s         sock = None
281s         try:
281s             sock = socket.socket(af, socktype, proto)
281s 
281s             # If provided, set socket level options before connecting.
281s             _set_socket_options(sock, socket_options)
281s 
281s             if timeout is not _DEFAULT_TIMEOUT:
281s                 sock.settimeout(timeout)
281s             if source_address:
281s                 sock.bind(source_address)
281s >           sock.connect(sa)
281s E           ConnectionRefusedError: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s method = 'GET', url = '/a%40b/api/contents', body = None
281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s redirect = False, assert_same_host = False
281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
281s release_conn = False, chunked = False, body_pos = None, preload_content = False
281s decode_content = False, response_kw = {}
281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
281s destination_scheme = None, conn = None, release_this_conn = True
281s http_tunnel_required = False, err = None, clean_exit = False
281s 
281s def urlopen(  # type: ignore[override]
281s     self,
281s     method: str,
281s     url: str,
281s     body: _TYPE_BODY | None = None,
281s     headers: typing.Mapping[str, str] | None = None,
281s     retries: Retry | bool | int | None = None,
281s     redirect: bool = True,
281s     assert_same_host: bool = True,
281s     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s     pool_timeout: int | None = None,
281s     release_conn: bool | None = None,
281s     chunked: bool = False,
281s     body_pos: _TYPE_BODY_POSITION | None = None,
281s     preload_content: bool = True,
281s     decode_content: bool = True,
281s     **response_kw: typing.Any,
281s ) -> BaseHTTPResponse:
281s     """
281s     Get a connection from the pool and perform an HTTP request. This is the
281s     lowest level call for making a request, so you'll need to specify all
281s     the raw details.
281s 
281s     .. note::
281s 
281s        More commonly, it's appropriate to use a convenience method
281s        such as :meth:`request`.
281s 
281s     .. note::
281s 
281s        `release_conn` will only behave as expected if
281s        `preload_content=False` because we want to make
281s        `preload_content=False` the default behaviour someday soon without
281s        breaking backwards compatibility.
281s 
281s     :param method:
281s         HTTP request method (such as GET, POST, PUT, etc.)
281s 
281s     :param url:
281s         The URL to perform the request on.
281s 
281s     :param body:
281s         Data to send in the request body, either :class:`str`, :class:`bytes`,
281s         an iterable of :class:`str`/:class:`bytes`, or a file-like object.
281s 
281s     :param headers:
281s         Dictionary of custom headers to send, such as User-Agent,
281s         If-None-Match, etc. If None, pool headers are used. If provided,
281s         these headers completely replace any pool-specific headers.
281s 
281s     :param retries:
281s         Configure the number of retries to allow before raising a
281s         :class:`~urllib3.exceptions.MaxRetryError` exception.
281s 
281s         Pass ``None`` to retry until you receive a response. Pass a
281s         :class:`~urllib3.util.retry.Retry` object for fine-grained control
281s         over different types of retries.
281s         Pass an integer number to retry connection errors that many times,
281s         but no other types of errors. Pass zero to never retry.
281s 
281s         If ``False``, then retries are disabled and any exception is raised
281s         immediately. Also, instead of raising a MaxRetryError on redirects,
281s         the redirect response will be returned.
281s 
281s     :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
281s 
281s     :param redirect:
281s         If True, automatically handle redirects (status codes 301, 302,
281s         303, 307, 308). Each redirect counts as a retry. Disabling retries
281s         will disable redirect, too.
281s 
281s     :param assert_same_host:
281s         If ``True``, will make sure that the host of the pool requests is
281s         consistent else will raise HostChangedError. When ``False``, you can
281s         use the pool on an HTTP proxy and request foreign hosts.
281s 
281s     :param timeout:
281s         If specified, overrides the default timeout for this one
281s         request. It may be a float (in seconds) or an instance of
281s         :class:`urllib3.util.Timeout`.
281s 
281s     :param pool_timeout:
281s         If set and the pool is set to block=True, then this method will
281s         block for ``pool_timeout`` seconds and raise EmptyPoolError if no
281s         connection is available within the time period.
281s 
281s     :param bool preload_content:
281s         If True, the response's body will be preloaded into memory.
281s 
281s     :param bool decode_content:
281s         If True, will attempt to decode the body based on the
281s         'content-encoding' header.
281s 
281s     :param release_conn:
281s         If False, then the urlopen call will not release the connection
281s         back into the pool once a response is received (but will release if
281s         you read the entire contents of the response such as when
281s         `preload_content=True`). This is useful if you're not preloading
281s         the response's content immediately. You will need to call
281s         ``r.release_conn()`` on the response ``r`` to return the connection
281s         back into the pool. If None, it takes the value of ``preload_content``
281s         which defaults to ``True``.
281s 
281s     :param bool chunked:
281s         If True, urllib3 will send the body using chunked transfer
281s         encoding. Otherwise, urllib3 will send the body using the standard
281s         content-length form. Defaults to False.
281s 
281s     :param int body_pos:
281s         Position to seek to in file-like body in the event of a retry or
281s         redirect. Typically this won't need to be set because urllib3 will
281s         auto-populate the value when needed.
281s     """
281s     parsed_url = parse_url(url)
281s     destination_scheme = parsed_url.scheme
281s 
281s     if headers is None:
281s         headers = self.headers
281s 
281s     if not isinstance(retries, Retry):
281s         retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
281s 
281s     if release_conn is None:
281s         release_conn = preload_content
281s 
281s     # Check host
281s     if assert_same_host and not self.is_same_host(url):
281s         raise HostChangedError(self, url, retries)
281s 
281s     # Ensure that the URL we're connecting to is properly encoded
281s     if url.startswith("/"):
281s         url = to_str(_encode_target(url))
281s     else:
281s         url = to_str(parsed_url.url)
281s 
281s     conn = None
281s 
281s     # Track whether `conn` needs to be released before
281s     # returning/raising/recursing. Update this variable if necessary, and
281s     # leave `release_conn` constant throughout the function. That way, if
281s     # the function recurses, the original value of `release_conn` will be
281s     # passed down into the recursive call, and its value will be respected.
281s     #
281s     # See issue #651 [1] for details.
281s     #
281s     # [1] 
281s     release_this_conn = release_conn
281s 
281s     http_tunnel_required = connection_requires_http_tunnel(
281s         self.proxy, self.proxy_config, destination_scheme
281s     )
281s 
281s     # Merge the proxy headers. Only done when not using HTTP CONNECT. We
281s     # have to copy the headers dict so we can safely change it without those
281s     # changes being reflected in anyone else's copy.
281s     if not http_tunnel_required:
281s         headers = headers.copy() # type: ignore[attr-defined]
281s         headers.update(self.proxy_headers) # type: ignore[union-attr]
281s 
281s     # Must keep the exception bound to a separate variable or else Python 3
281s     # complains about UnboundLocalError.
281s     err = None
281s 
281s     # Keep track of whether we cleanly exited the except block. This
281s     # ensures we do proper cleanup in finally.
281s     clean_exit = False
281s 
281s     # Rewind body position, if needed. Record current position
281s     # for future rewinds in the event of a redirect/retry.
281s     body_pos = set_file_position(body, body_pos)
281s 
281s     try:
281s         # Request a connection from the queue.
281s         timeout_obj = self._get_timeout(timeout)
281s         conn = self._get_conn(timeout=pool_timeout)
281s 
281s         conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment]
281s 
281s         # Is this a closed/new connection that requires CONNECT tunnelling?
281s         if self.proxy is not None and http_tunnel_required and conn.is_closed:
281s             try:
281s                 self._prepare_proxy(conn)
281s             except (BaseSSLError, OSError, SocketTimeout) as e:
281s                 self._raise_timeout(
281s                     err=e, url=self.proxy.url, timeout_value=conn.timeout
281s                 )
281s                 raise
281s 
281s         # If we're going to release the connection in ``finally:``, then
281s         # the response doesn't need to know about the connection. Otherwise
281s         # it will also try to release it and we'll have a double-release
281s         # mess.
281s         response_conn = conn if not release_conn else None
281s 
281s         # Make the request on the HTTPConnection object
281s >       response = self._make_request(
281s             conn,
281s             method,
281s             url,
281s             timeout=timeout_obj,
281s             body=body,
281s             headers=headers,
281s             chunked=chunked,
281s             retries=retries,
281s             response_conn=response_conn,
281s             preload_content=preload_content,
281s             decode_content=decode_content,
281s             **response_kw,
281s         )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
281s     conn.request(
281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
281s     self.endheaders()
281s /usr/lib/python3.12/http/client.py:1331: in endheaders
281s     self._send_output(message_body, encode_chunked=encode_chunked)
281s /usr/lib/python3.12/http/client.py:1091: in _send_output
281s     self.send(msg)
281s /usr/lib/python3.12/http/client.py:1035: in
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause)
281s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
281s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s >               cls.fetch_url(url)
281s 
281s notebook/tests/launchnotebook.py:53: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s notebook/tests/launchnotebook.py:82: in fetch_url
281s     return requests.get(url)
281s /usr/lib/python3/dist-packages/requests/api.py:73: in get
281s     return request("get", url, params=params, **kwargs)
281s /usr/lib/python3/dist-packages/requests/api.py:59: in request
281s     return session.request(method=method, url=url, **kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
281s     resp = self.send(prep, **send_kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
281s     r = adapter.send(request, **kwargs)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E                   RuntimeError: The notebook server failed to start
281s 
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s _____ ERROR at setup of GenericFileCheckpointsAPITest.test_get_nb_contents _____
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s >           sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s     raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s 
281s     def create_connection(
281s         address: tuple[str, int],
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         source_address: tuple[str, int] | None = None,
281s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s     ) -> socket.socket:
281s         """Connect to *address* and return the socket object.
281s 
281s         Convenience function. Connect to *address* (a 2-tuple ``(host,
281s         port)``) and return the socket object.
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s             _set_socket_options(sock, socket_options)
281s 
281s             if timeout is not _DEFAULT_TIMEOUT:
281s                 sock.settimeout(timeout)
281s             if source_address:
281s                 sock.bind(source_address)
281s >           sock.connect(sa)
281s E           ConnectionRefusedError: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s method = 'GET', url = '/a%40b/api/contents', body = None
281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s redirect = False, assert_same_host = False
281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
281s release_conn = False, chunked = False, body_pos = None, preload_content = False
281s decode_content = False, response_kw = {}
281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
281s destination_scheme = None, conn = None, release_this_conn = True
281s http_tunnel_required = False, err = None, clean_exit = False
281s 
281s     def urlopen(  # type: ignore[override]
281s         self,
281s         method: str,
281s         url: str,
281s         body: _TYPE_BODY | None = None,
281s         headers: typing.Mapping[str, str] | None = None,
281s         retries: Retry | bool | int | None = None,
281s         redirect: bool = True,
281s         assert_same_host: bool = True,
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         pool_timeout: int | None = None,
281s         release_conn: bool | None = None,
281s         chunked: bool = False,
281s         body_pos: _TYPE_BODY_POSITION | None = None,
281s         preload_content: bool = True,
281s         decode_content: bool = True,
281s         **response_kw: typing.Any,
281s     ) -> BaseHTTPResponse:
281s         """
281s         Get a connection from the pool and perform an HTTP request.
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s _____ ERROR at setup of GenericFileCheckpointsAPITest.test_get_nb_invalid ______ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s             raise reraise(type(error), error, _stacktrace)
281s 
281s         total = self.total
281s         if total is not None:
281s             total -= 1
281s 
281s         connect = self.connect
281s         read = self.read
281s         redirect = self.redirect
281s         status_count = self.status
281s         other = self.other
281s         cause = "unknown"
281s         status = None
281s         redirect_location = None
281s 
281s         if error and self._is_connection_error(error):
281s             # Connect retry?
281s             if connect is False:
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif connect is not None:
281s                 connect -= 1
281s 
281s         elif error and self._is_read_error(error):
281s             # Read retry?
281s             if read is False or method is None or not self._is_method_retryable(method):
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif read is not None:
281s                 read -= 1
281s 
281s         elif error:
281s             # Other retry?
281s             if other is not None:
281s                 other -= 1
281s 
281s         elif response and response.get_redirect_location():
281s             # Redirect retry?
281s             if redirect is not None:
281s                 redirect -= 1
281s             cause = "too many redirects"
281s             response_redirect_location = response.get_redirect_location()
281s             if response_redirect_location:
281s                 redirect_location = response_redirect_location
281s             status = response.status
281s 
281s         else:
281s             # Incrementing because of a server error like a 500 in
281s             # status_forcelist and the given method is in the allowed_methods
281s             cause = ResponseError.GENERIC_ERROR
281s             if response and response.status:
281s                 if status_count is not None:
281s                     status_count -= 1
281s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
281s                 status = response.status
281s 
281s         history = self.history + (
281s             RequestHistory(method, url, error, status, redirect_location),
281s         )
281s 
281s         new_retry = self.new(
281s             total=total,
281s             connect=connect,
281s             read=read,
281s             redirect=redirect,
281s             status=status_count,
281s             other=other,
281s             history=history,
281s         )
281s 
281s         if new_retry.is_exhausted():
281s             reason = error or ResponseError(cause)
281s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
281s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s >               cls.fetch_url(url)
281s 
281s notebook/tests/launchnotebook.py:53: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s notebook/tests/launchnotebook.py:82: in fetch_url
281s     return requests.get(url)
281s /usr/lib/python3/dist-packages/requests/api.py:73: in get
281s     return request("get", url, params=params, **kwargs)
281s /usr/lib/python3/dist-packages/requests/api.py:59: in request
281s     return session.request(method=method, url=url, **kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
281s     resp = self.send(prep, **send_kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
281s     r = adapter.send(request, **kwargs)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s             resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s         except (ProtocolError, OSError) as err:
281s             raise ConnectionError(err, request=request)
281s 
281s         except MaxRetryError as e:
281s             if isinstance(e.reason, ConnectTimeoutError):
281s                 # TODO: Remove this in 3.0.0: see #2811
281s                 if not isinstance(e.reason, NewConnectionError):
281s                     raise ConnectTimeout(e, request=request)
281s 
281s             if isinstance(e.reason, ResponseError):
281s                 raise RetryError(e, request=request)
281s 
281s             if isinstance(e.reason, _ProxyError):
281s                 raise ProxyError(e, request=request)
281s 
281s             if isinstance(e.reason, _SSLError):
281s                 # This branch is for urllib3 v1.22 and later.
281s                 raise SSLError(e, request=request)
281s 
281s >           raise ConnectionError(e, request=request)
281s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def setup_class(cls):
281s         cls.tmp_dir = TemporaryDirectory()
281s         def tmp(*parts):
281s             path = os.path.join(cls.tmp_dir.name, *parts)
281s             try:
281s                 os.makedirs(path)
281s             except OSError as e:
281s                 if e.errno != errno.EEXIST:
281s                     raise
281s             return path
281s 
281s         cls.home_dir = tmp('home')
281s         data_dir = cls.data_dir = tmp('data')
281s         config_dir = cls.config_dir = tmp('config')
281s         runtime_dir = cls.runtime_dir = tmp('runtime')
281s         cls.notebook_dir = tmp('notebooks')
281s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
281s         cls.env_patch.start()
281s         # Patch systemwide & user-wide data & config directories, to isolate
281s         # the tests from oddities of the local setup. But leave Python env
281s         # locations alone, so data files for e.g. nbconvert are accessible.
281s         # If this isolation isn't sufficient, you may need to run the tests in
281s         # a virtualenv or conda env.
281s         cls.path_patch = patch.multiple(
281s             jupyter_core.paths,
281s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
281s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
281s         )
281s         cls.path_patch.start()
281s 
281s         config = cls.config or Config()
281s         config.NotebookNotary.db_file = ':memory:'
281s 
281s         cls.token = hexlify(os.urandom(4)).decode('ascii')
281s 
281s         started = Event()
281s         def start_thread():
281s             try:
281s                 bind_args = cls.get_bind_args()
281s                 app = cls.notebook = NotebookApp(
281s                     port_retries=0,
281s                     open_browser=False,
281s                     config_dir=cls.config_dir,
281s                     data_dir=cls.data_dir,
281s                     runtime_dir=cls.runtime_dir,
281s                     notebook_dir=cls.notebook_dir,
281s                     base_url=cls.url_prefix,
281s                     config=config,
281s                     allow_root=True,
281s                     token=cls.token,
281s                     **bind_args
281s                 )
281s                 if "asyncio" in sys.modules:
281s                     app._init_asyncio_patch()
281s                     import asyncio
281s 
281s                     asyncio.set_event_loop(asyncio.new_event_loop())
281s                     # Patch the current loop in order to match production
281s                     # behavior
281s                     import nest_asyncio
281s 
281s                     nest_asyncio.apply()
281s                 # don't register signal handler during tests
281s                 app.init_signal = lambda : None
281s                 # clear log handlers and propagate to root for nose to capture it
281s                 # needs to be redone after initialize, which reconfigures logging
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 app.initialize(argv=cls.get_argv())
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 loop = IOLoop.current()
281s                 loop.add_callback(started.set)
281s                 app.start()
281s             finally:
281s                 # set the event, so failure to start doesn't cause a hang
281s                 started.set()
281s                 app.session_manager.close()
281s         cls.notebook_thread = Thread(target=start_thread)
281s         cls.notebook_thread.daemon = True
281s         cls.notebook_thread.start()
281s         started.wait()
281s >       cls.wait_until_alive()
281s 
281s notebook/tests/launchnotebook.py:198: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E                   RuntimeError: The notebook server failed to start
281s 
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s ____ ERROR at setup of GenericFileCheckpointsAPITest.test_get_nb_no_content ____
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s >           sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s     raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s 
281s     def create_connection(
281s         address: tuple[str, int],
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         source_address: tuple[str, int] | None = None,
281s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s     ) -> socket.socket:
281s         """Connect to *address* and return the socket object.
281s 
281s         Convenience function. Connect to *address* (a 2-tuple ``(host,
281s         port)``) and return the socket object. Passing the optional
281s         *timeout* parameter will set the timeout on the socket instance
281s         before attempting to connect. If no *timeout* is supplied, the
281s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
281s         is used. If *source_address* is set it must be a tuple of (host, port)
281s         for the socket to bind as a source address before making the connection.
281s         An host of '' or port 0 tells the OS to use the default.
281s         """
281s 
281s         host, port = address
281s         if host.startswith("["):
281s             host = host.strip("[]")
281s         err = None
281s 
281s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
281s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
281s         # The original create_connection function always returns all records.
281s         family = allowed_gai_family()
281s 
281s         try:
281s             host.encode("idna")
281s         except UnicodeError:
281s             raise LocationParseError(f"'{host}', label empty or too long") from None
281s 
281s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
281s             af, socktype, proto, canonname, sa = res
281s             sock = None
281s             try:
281s                 sock = socket.socket(af, socktype, proto)
281s 
281s                 # If provided, set socket level options before connecting.
281s                 _set_socket_options(sock, socket_options)
281s 
281s                 if timeout is not _DEFAULT_TIMEOUT:
281s                     sock.settimeout(timeout)
281s                 if source_address:
281s                     sock.bind(source_address)
281s >               sock.connect(sa)
281s E               ConnectionRefusedError: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s method = 'GET', url = '/a%40b/api/contents', body = None
281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s redirect = False, assert_same_host = False
281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
281s release_conn = False, chunked = False, body_pos = None, preload_content = False
281s decode_content = False, response_kw = {}
281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
281s destination_scheme = None, conn = None, release_this_conn = True
281s http_tunnel_required = False, err = None, clean_exit = False
281s 
281s     def urlopen(  # type: ignore[override]
281s         self,
281s         method: str,
281s         url: str,
281s         body: _TYPE_BODY | None = None,
281s         headers: typing.Mapping[str, str] | None = None,
281s         retries: Retry | bool | int | None = None,
281s         redirect: bool = True,
281s         assert_same_host: bool = True,
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         pool_timeout: int | None = None,
281s         release_conn: bool | None = None,
281s         chunked: bool = False,
281s         body_pos: _TYPE_BODY_POSITION | None = None,
281s         preload_content: bool = True,
281s         decode_content: bool = True,
281s         **response_kw: typing.Any,
281s     ) -> BaseHTTPResponse:
281s         """
281s         Get a connection from the pool and perform an HTTP request. This is the
281s         lowest level call for making a request, so you'll need to specify all
281s         the raw details.
281s 
281s         .. note::
281s 
281s            More commonly, it's appropriate to use a convenience method
281s            such as :meth:`request`.
281s 
281s         .. note::
281s 
281s            `release_conn` will only behave as expected if
281s            `preload_content=False` because we want to make
281s            `preload_content=False` the default behaviour someday soon without
281s            breaking backwards compatibility.
281s 
281s         :param method:
281s             HTTP request method (such as GET, POST, PUT, etc.)
281s 
281s         :param url:
281s             The URL to perform the request on.
281s 
281s         :param body:
281s             Data to send in the request body, either :class:`str`, :class:`bytes`,
281s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
281s 
281s         :param headers:
281s             Dictionary of custom headers to send, such as User-Agent,
281s             If-None-Match, etc. If None, pool headers are used. If provided,
281s             these headers completely replace any pool-specific headers.
281s 
281s         :param retries:
281s             Configure the number of retries to allow before raising a
281s             :class:`~urllib3.exceptions.MaxRetryError` exception.
281s 
281s             Pass ``None`` to retry until you receive a response. Pass a
281s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
281s             over different types of retries.
281s             Pass an integer number to retry connection errors that many times,
281s             but no other types of errors. Pass zero to never retry.
281s 
281s             If ``False``, then retries are disabled and any exception is raised
281s             immediately. Also, instead of raising a MaxRetryError on redirects,
281s             the redirect response will be returned.
281s 
281s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
281s 
281s         :param redirect:
281s             If True, automatically handle redirects (status codes 301, 302,
281s             303, 307, 308). Each redirect counts as a retry. Disabling retries
281s             will disable redirect, too.
281s 
281s         :param assert_same_host:
281s             If ``True``, will make sure that the host of the pool requests is
281s             consistent else will raise HostChangedError. When ``False``, you can
281s             use the pool on an HTTP proxy and request foreign hosts.
281s 
281s         :param timeout:
281s             If specified, overrides the default timeout for this one
281s             request. It may be a float (in seconds) or an instance of
281s             :class:`urllib3.util.Timeout`.
281s 
281s         :param pool_timeout:
281s             If set and the pool is set to block=True, then this method will
281s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
281s             connection is available within the time period.
281s 
281s         :param bool preload_content:
281s             If True, the response's body will be preloaded into memory.
281s 
281s         :param bool decode_content:
281s             If True, will attempt to decode the body based on the
281s             'content-encoding' header.
281s 
281s         :param release_conn:
281s             If False, then the urlopen call will not release the connection
281s             back into the pool once a response is received (but will release if
281s             you read the entire contents of the response such as when
281s             `preload_content=True`). This is useful if you're not preloading
281s             the response's content immediately. You will need to call
281s             ``r.release_conn()`` on the response ``r`` to return the connection
281s             back into the pool. If None, it takes the value of ``preload_content``
281s             which defaults to ``True``.
281s 
281s         :param bool chunked:
281s             If True, urllib3 will send the body using chunked transfer
281s             encoding. Otherwise, urllib3 will send the body using the standard
281s             content-length form. Defaults to False.
281s 
281s         :param int body_pos:
281s             Position to seek to in file-like body in the event of a retry or
281s             redirect. Typically this won't need to be set because urllib3 will
281s             auto-populate the value when needed.
281s         """
281s         parsed_url = parse_url(url)
281s         destination_scheme = parsed_url.scheme
281s 
281s         if headers is None:
281s             headers = self.headers
281s 
281s         if not isinstance(retries, Retry):
281s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
281s 
281s         if release_conn is None:
281s             release_conn = preload_content
281s 
281s         # Check host
281s         if assert_same_host and not self.is_same_host(url):
281s             raise HostChangedError(self, url, retries)
281s 
281s         # Ensure that the URL we're connecting to is properly encoded
281s         if url.startswith("/"):
281s             url = to_str(_encode_target(url))
281s         else:
281s             url = to_str(parsed_url.url)
281s 
281s         conn = None
281s 
281s         # Track whether `conn` needs to be released before
281s         # returning/raising/recursing. Update this variable if necessary, and
281s         # leave `release_conn` constant throughout the function. That way, if
281s         # the function recurses, the original value of `release_conn` will be
281s         # passed down into the recursive call, and its value will be respected.
281s         #
281s         # See issue #651 [1] for details.
281s         #
281s         # [1] 
281s         release_this_conn = release_conn
281s 
281s         http_tunnel_required = connection_requires_http_tunnel(
281s             self.proxy, self.proxy_config, destination_scheme
281s         )
281s 
281s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
281s         # have to copy the headers dict so we can safely change it without those
281s         # changes being reflected in anyone else's copy.
281s         if not http_tunnel_required:
281s             headers = headers.copy()  # type: ignore[attr-defined]
281s             headers.update(self.proxy_headers)  # type: ignore[union-attr]
281s 
281s         # Must keep the exception bound to a separate variable or else Python 3
281s         # complains about UnboundLocalError.
281s         err = None
281s 
281s         # Keep track of whether we cleanly exited the except block. This
281s         # ensures we do proper cleanup in finally.
281s         clean_exit = False
281s 
281s         # Rewind body position, if needed. Record current position
281s         # for future rewinds in the event of a redirect/retry.
281s         body_pos = set_file_position(body, body_pos)
281s 
281s         try:
281s             # Request a connection from the queue.
281s             timeout_obj = self._get_timeout(timeout)
281s             conn = self._get_conn(timeout=pool_timeout)
281s 
281s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
281s 
281s             # Is this a closed/new connection that requires CONNECT tunnelling?
281s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
281s                 try:
281s                     self._prepare_proxy(conn)
281s                 except (BaseSSLError, OSError, SocketTimeout) as e:
281s                     self._raise_timeout(
281s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
281s                     )
281s                     raise
281s 
281s             # If we're going to release the connection in ``finally:``, then
281s             # the response doesn't need to know about the connection. Otherwise
281s             # it will also try to release it and we'll have a double-release
281s             # mess.
281s             response_conn = conn if not release_conn else None
281s 
281s             # Make the request on the HTTPConnection object
281s >           response = self._make_request(
281s                 conn,
281s                 method,
281s                 url,
281s                 timeout=timeout_obj,
281s                 body=body,
281s                 headers=headers,
281s                 chunked=chunked,
281s                 retries=retries,
281s                 response_conn=response_conn,
281s                 preload_content=preload_content,
281s                 decode_content=decode_content,
281s                 **response_kw,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
281s     conn.request(
281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
281s     self.endheaders()
281s /usr/lib/python3.12/http/client.py:1331: in endheaders
281s     self._send_output(message_body, encode_chunked=encode_chunked)
281s /usr/lib/python3.12/http/client.py:1091: in _send_output
281s     self.send(msg)
281s /usr/lib/python3.12/http/client.py:1035: in send
281s     self.connect()
281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
281s     self.sock = self._new_conn()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s             sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s         except socket.gaierror as e:
281s             raise NameResolutionError(self.host, self, e) from e
281s         except SocketTimeout as e:
281s             raise ConnectTimeoutError(
281s                 self,
281s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
281s             ) from e
281s 
281s         except OSError as e:
281s >           raise NewConnectionError(
281s                 self, f"Failed to establish a new connection: {e}"
281s             ) from e
281s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s >           resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
281s     retries = retries.increment(
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s method = 'GET', url = '/a%40b/api/contents', response = None
281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
281s _pool = 
281s _stacktrace = 
281s 
281s     def increment(
281s         self,
281s         method: str | None = None,
281s         url: str | None = None,
281s         response: BaseHTTPResponse | None = None,
281s         error: Exception | None = None,
281s         _pool: ConnectionPool | None = None,
281s         _stacktrace: TracebackType | None = None,
281s     ) -> Retry:
281s         """Return a new Retry object with incremented retry counters.
281s 
281s         :param response: A response object, or None, if the server did not
281s             return a response.
281s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
281s         :param Exception error: An error encountered during the request, or
281s             None if the response was received successfully.
281s 
281s         :return: A new ``Retry`` object.
281s         """
281s         if self.total is False and error:
281s             # Disabled, indicate to re-raise the error.
281s             raise reraise(type(error), error, _stacktrace)
281s 
281s         total = self.total
281s         if total is not None:
281s             total -= 1
281s 
281s         connect = self.connect
281s         read = self.read
281s         redirect = self.redirect
281s         status_count = self.status
281s         other = self.other
281s         cause = "unknown"
281s         status = None
281s         redirect_location = None
281s 
281s         if error and self._is_connection_error(error):
281s             # Connect retry?
281s             if connect is False:
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif connect is not None:
281s                 connect -= 1
281s 
281s         elif error and self._is_read_error(error):
281s             # Read retry?
281s             if read is False or method is None or not self._is_method_retryable(method):
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif read is not None:
281s                 read -= 1
281s 
281s         elif error:
281s             # Other retry?
281s             if other is not None:
281s                 other -= 1
281s 
281s         elif response and response.get_redirect_location():
281s             # Redirect retry?
281s             if redirect is not None:
281s                 redirect -= 1
281s             cause = "too many redirects"
281s             response_redirect_location = response.get_redirect_location()
281s             if response_redirect_location:
281s                 redirect_location = response_redirect_location
281s             status = response.status
281s 
281s         else:
281s             # Incrementing because of a server error like a 500 in
281s             # status_forcelist and the given method is in the allowed_methods
281s             cause = ResponseError.GENERIC_ERROR
281s             if response and response.status:
281s                 if status_count is not None:
281s                     status_count -= 1
281s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
281s                 status = response.status
281s 
281s         history = self.history + (
281s             RequestHistory(method, url, error, status, redirect_location),
281s         )
281s 
281s         new_retry = self.new(
281s             total=total,
281s             connect=connect,
281s             read=read,
281s             redirect=redirect,
281s             status=status_count,
281s             other=other,
281s             history=history,
281s         )
281s 
281s         if new_retry.is_exhausted():
281s             reason = error or ResponseError(cause)
281s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
281s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s >               cls.fetch_url(url)
281s 
281s notebook/tests/launchnotebook.py:53: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s notebook/tests/launchnotebook.py:82: in fetch_url
281s     return requests.get(url)
281s /usr/lib/python3/dist-packages/requests/api.py:73: in get
281s     return request("get", url, params=params, **kwargs)
281s /usr/lib/python3/dist-packages/requests/api.py:59: in request
281s     return session.request(method=method, url=url, **kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
281s     resp = self.send(prep, **send_kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
281s     r = adapter.send(request, **kwargs)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s             resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s         except (ProtocolError, OSError) as err:
281s             raise ConnectionError(err, request=request)
281s 
281s         except MaxRetryError as e:
281s             if isinstance(e.reason, ConnectTimeoutError):
281s                 # TODO: Remove this in 3.0.0: see #2811
281s                 if not isinstance(e.reason, NewConnectionError):
281s                     raise ConnectTimeout(e, request=request)
281s 
281s             if isinstance(e.reason, ResponseError):
281s                 raise RetryError(e, request=request)
281s 
281s             if isinstance(e.reason, _ProxyError):
281s                 raise ProxyError(e, request=request)
281s 
281s             if isinstance(e.reason, _SSLError):
281s                 # This branch is for urllib3 v1.22 and later.
281s                 raise SSLError(e, request=request)
281s 
281s >           raise ConnectionError(e, request=request)
281s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def setup_class(cls):
281s         cls.tmp_dir = TemporaryDirectory()
281s         def tmp(*parts):
281s             path = os.path.join(cls.tmp_dir.name, *parts)
281s             try:
281s                 os.makedirs(path)
281s             except OSError as e:
281s                 if e.errno != errno.EEXIST:
281s                     raise
281s             return path
281s 
281s         cls.home_dir = tmp('home')
281s         data_dir = cls.data_dir = tmp('data')
281s         config_dir = cls.config_dir = tmp('config')
281s         runtime_dir = cls.runtime_dir = tmp('runtime')
281s         cls.notebook_dir = tmp('notebooks')
281s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
281s         cls.env_patch.start()
281s         # Patch systemwide & user-wide data & config directories, to isolate
281s         # the tests from oddities of the local setup. But leave Python env
281s         # locations alone, so data files for e.g. nbconvert are accessible.
281s         # If this isolation isn't sufficient, you may need to run the tests in
281s         # a virtualenv or conda env.
281s         cls.path_patch = patch.multiple(
281s             jupyter_core.paths,
281s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
281s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
281s         )
281s         cls.path_patch.start()
281s 
281s         config = cls.config or Config()
281s         config.NotebookNotary.db_file = ':memory:'
281s 
281s         cls.token = hexlify(os.urandom(4)).decode('ascii')
281s 
281s         started = Event()
281s         def start_thread():
281s             try:
281s                 bind_args = cls.get_bind_args()
281s                 app = cls.notebook = NotebookApp(
281s                     port_retries=0,
281s                     open_browser=False,
281s                     config_dir=cls.config_dir,
281s                     data_dir=cls.data_dir,
281s                     runtime_dir=cls.runtime_dir,
281s                     notebook_dir=cls.notebook_dir,
281s                     base_url=cls.url_prefix,
281s                     config=config,
281s                     allow_root=True,
281s                     token=cls.token,
281s                     **bind_args
281s                 )
281s                 if "asyncio" in sys.modules:
281s                     app._init_asyncio_patch()
281s                     import asyncio
281s 
281s                     asyncio.set_event_loop(asyncio.new_event_loop())
281s                     # Patch the current loop in order to match production
281s                     # behavior
281s                     import nest_asyncio
281s 
281s                     nest_asyncio.apply()
281s                 # don't register signal handler during tests
281s                 app.init_signal = lambda : None
281s                 # clear log handlers and propagate to root for nose to capture it
281s                 # needs to be redone after initialize, which reconfigures logging
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 app.initialize(argv=cls.get_argv())
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 loop = IOLoop.current()
281s                 loop.add_callback(started.set)
281s                 app.start()
281s             finally:
281s                 # set the event, so failure to start doesn't cause a hang
281s                 started.set()
281s                 app.session_manager.close()
281s         cls.notebook_thread = Thread(target=start_thread)
281s         cls.notebook_thread.daemon = True
281s         cls.notebook_thread.start()
281s         started.wait()
281s >       cls.wait_until_alive()
281s 
281s notebook/tests/launchnotebook.py:198: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E                   RuntimeError: The notebook server failed to start
281s 
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s _ ERROR at setup of GenericFileCheckpointsAPITest.test_get_text_file_contents __
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s >           sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s     raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s 
281s     def create_connection(
281s         address: tuple[str, int],
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         source_address: tuple[str, int] | None = None,
281s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s     ) -> socket.socket:
281s         """Connect to *address* and return the socket object.
281s 
281s         Convenience function. Connect to *address* (a 2-tuple ``(host,
281s         port)``) and return the socket object. Passing the optional
281s         *timeout* parameter will set the timeout on the socket instance
281s         before attempting to connect. If no *timeout* is supplied, the
281s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
281s         is used. If *source_address* is set it must be a tuple of (host, port)
281s         for the socket to bind as a source address before making the connection.
281s         An host of '' or port 0 tells the OS to use the default.
281s         """
281s 
281s         host, port = address
281s         if host.startswith("["):
281s             host = host.strip("[]")
281s         err = None
281s 
281s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
281s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
281s         # The original create_connection function always returns all records.
281s         family = allowed_gai_family()
281s 
281s         try:
281s             host.encode("idna")
281s         except UnicodeError:
281s             raise LocationParseError(f"'{host}', label empty or too long") from None
281s 
281s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
281s             af, socktype, proto, canonname, sa = res
281s             sock = None
281s             try:
281s                 sock = socket.socket(af, socktype, proto)
281s 
281s                 # If provided, set socket level options before connecting.
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s ________ ERROR at setup of GenericFileCheckpointsAPITest.test_list_dirs ________
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s __ ERROR at setup of GenericFileCheckpointsAPITest.test_list_nonexistant_dir ___ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s _____ ERROR at setup of GenericFileCheckpointsAPITest.test_list_notebooks ______ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s __________ ERROR at setup of GenericFileCheckpointsAPITest.test_mkdir __________ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s                 raise reraise(type(error), error, _stacktrace)
281s 
281s         total = self.total
281s         if total is not None:
281s             total -= 1
281s 
281s         connect = self.connect
281s         read = self.read
281s         redirect = self.redirect
281s         status_count = self.status
281s         other = self.other
281s         cause = "unknown"
281s         status = None
281s         redirect_location = None
281s 
281s         if error and self._is_connection_error(error):
281s             # Connect retry?
281s             if connect is False:
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif connect is not None:
281s                 connect -= 1
281s 
281s         elif error and self._is_read_error(error):
281s             # Read retry?
281s             if read is False or method is None or not self._is_method_retryable(method):
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif read is not None:
281s                 read -= 1
281s 
281s         elif error:
281s             # Other retry?
281s             if other is not None:
281s                 other -= 1
281s 
281s         elif response and response.get_redirect_location():
281s             # Redirect retry?
281s             if redirect is not None:
281s                 redirect -= 1
281s             cause = "too many redirects"
281s             response_redirect_location = response.get_redirect_location()
281s             if response_redirect_location:
281s                 redirect_location = response_redirect_location
281s             status = response.status
281s 
281s         else:
281s             # Incrementing because of a server error like a 500 in
281s             # status_forcelist and the given method is in the allowed_methods
281s             cause = ResponseError.GENERIC_ERROR
281s             if response and response.status:
281s                 if status_count is not None:
281s                     status_count -= 1
281s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
281s                 status = response.status
281s 
281s         history = self.history + (
281s             RequestHistory(method, url, error, status, redirect_location),
281s         )
281s 
281s         new_retry = self.new(
281s             total=total,
281s             connect=connect,
281s             read=read,
281s             redirect=redirect,
281s             status=status_count,
281s             other=other,
281s             history=history,
281s         )
281s 
281s         if new_retry.is_exhausted():
281s             reason = error or ResponseError(cause)
281s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
281s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s >               cls.fetch_url(url)
281s 
281s notebook/tests/launchnotebook.py:53: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s notebook/tests/launchnotebook.py:82: in fetch_url
281s     return requests.get(url)
281s /usr/lib/python3/dist-packages/requests/api.py:73: in get
281s     return request("get", url, params=params, **kwargs)
281s /usr/lib/python3/dist-packages/requests/api.py:59: in request
281s     return session.request(method=method, url=url, **kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
281s     resp = self.send(prep, **send_kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
281s     r = adapter.send(request, **kwargs)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s             resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s         except (ProtocolError, OSError) as err:
281s             raise ConnectionError(err, request=request)
281s 
281s         except MaxRetryError as e:
281s             if isinstance(e.reason, ConnectTimeoutError):
281s                 # TODO: Remove this in 3.0.0: see #2811
281s                 if not isinstance(e.reason, NewConnectionError):
281s                     raise ConnectTimeout(e, request=request)
281s 
281s             if isinstance(e.reason, ResponseError):
281s                 raise RetryError(e, request=request)
281s 
281s             if isinstance(e.reason, _ProxyError):
281s                 raise ProxyError(e, request=request)
281s 
281s             if isinstance(e.reason, _SSLError):
281s                 # This branch is for urllib3 v1.22 and later.
281s                 raise SSLError(e, request=request)
281s 
281s >           raise ConnectionError(e, request=request)
281s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def setup_class(cls):
281s         cls.tmp_dir = TemporaryDirectory()
281s         def tmp(*parts):
281s             path = os.path.join(cls.tmp_dir.name, *parts)
281s             try:
281s                 os.makedirs(path)
281s             except OSError as e:
281s                 if e.errno != errno.EEXIST:
281s                     raise
281s             return path
281s 
281s         cls.home_dir = tmp('home')
281s         data_dir = cls.data_dir = tmp('data')
281s         config_dir = cls.config_dir = tmp('config')
281s         runtime_dir = cls.runtime_dir = tmp('runtime')
281s         cls.notebook_dir = tmp('notebooks')
281s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
281s         cls.env_patch.start()
281s         # Patch systemwide & user-wide data & config directories, to isolate
281s         # the tests from oddities of the local setup. But leave Python env
281s         # locations alone, so data files for e.g. nbconvert are accessible.
281s         # If this isolation isn't sufficient, you may need to run the tests in
281s         # a virtualenv or conda env.
281s         cls.path_patch = patch.multiple(
281s             jupyter_core.paths,
281s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
281s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
281s         )
281s         cls.path_patch.start()
281s 
281s         config = cls.config or Config()
281s         config.NotebookNotary.db_file = ':memory:'
281s 
281s         cls.token = hexlify(os.urandom(4)).decode('ascii')
281s 
281s         started = Event()
281s         def start_thread():
281s             try:
281s                 bind_args = cls.get_bind_args()
281s                 app = cls.notebook = NotebookApp(
281s                     port_retries=0,
281s                     open_browser=False,
281s                     config_dir=cls.config_dir,
281s                     data_dir=cls.data_dir,
281s                     runtime_dir=cls.runtime_dir,
281s                     notebook_dir=cls.notebook_dir,
281s                     base_url=cls.url_prefix,
281s                     config=config,
281s                     allow_root=True,
281s                     token=cls.token,
281s                     **bind_args
281s                 )
281s                 if "asyncio" in sys.modules:
281s                     app._init_asyncio_patch()
281s                     import asyncio
281s 
281s                     asyncio.set_event_loop(asyncio.new_event_loop())
281s                     # Patch the current loop in order to match production
281s                     # behavior
281s                     import nest_asyncio
281s 
281s                     nest_asyncio.apply()
281s                 # don't register signal handler during tests
281s                 app.init_signal = lambda : None
281s                 # clear log handlers and propagate to root for nose to capture it
281s                 # needs to be redone after initialize, which reconfigures logging
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 app.initialize(argv=cls.get_argv())
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 loop = IOLoop.current()
281s                 loop.add_callback(started.set)
281s                 app.start()
281s             finally:
281s                 # set the event, so failure to start doesn't cause a hang
281s                 started.set()
281s                 app.session_manager.close()
281s         cls.notebook_thread = Thread(target=start_thread)
281s         cls.notebook_thread.daemon = True
281s         cls.notebook_thread.start()
281s         started.wait()
281s >       cls.wait_until_alive()
281s 
281s notebook/tests/launchnotebook.py:198: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E                   RuntimeError: The notebook server failed to start
281s 
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s ____ ERROR at setup of GenericFileCheckpointsAPITest.test_mkdir_hidden_400 _____
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s >           sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s     raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s 
281s     def create_connection(
281s         address: tuple[str, int],
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         source_address: tuple[str, int] | None = None,
281s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s     ) -> socket.socket:
281s         """Connect to *address* and return the socket object.
281s 
281s         Convenience function. Connect to *address* (a 2-tuple ``(host,
281s         port)``) and return the socket object.
281s         Passing the optional
281s         *timeout* parameter will set the timeout on the socket instance
281s         before attempting to connect. If no *timeout* is supplied, the
281s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
281s         is used. If *source_address* is set it must be a tuple of (host, port)
281s         for the socket to bind as a source address before making the connection.
281s         An host of '' or port 0 tells the OS to use the default.
281s         """
281s 
281s         host, port = address
281s         if host.startswith("["):
281s             host = host.strip("[]")
281s         err = None
281s 
281s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
281s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
281s         # The original create_connection function always returns all records.
281s         family = allowed_gai_family()
281s 
281s         try:
281s             host.encode("idna")
281s         except UnicodeError:
281s             raise LocationParseError(f"'{host}', label empty or too long") from None
281s 
281s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
281s             af, socktype, proto, canonname, sa = res
281s             sock = None
281s             try:
281s                 sock = socket.socket(af, socktype, proto)
281s 
281s                 # If provided, set socket level options before connecting.
281s                 _set_socket_options(sock, socket_options)
281s 
281s                 if timeout is not _DEFAULT_TIMEOUT:
281s                     sock.settimeout(timeout)
281s                 if source_address:
281s                     sock.bind(source_address)
281s >               sock.connect(sa)
281s E               ConnectionRefusedError: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s method = 'GET', url = '/a%40b/api/contents', body = None
281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s redirect = False, assert_same_host = False
281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
281s release_conn = False, chunked = False, body_pos = None, preload_content = False
281s decode_content = False, response_kw = {}
281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
281s destination_scheme = None, conn = None, release_this_conn = True
281s http_tunnel_required = False, err = None, clean_exit = False
281s 
281s     def urlopen(  # type: ignore[override]
281s         self,
281s         method: str,
281s         url: str,
281s         body: _TYPE_BODY | None = None,
281s         headers: typing.Mapping[str, str] | None = None,
281s         retries: Retry | bool | int | None = None,
281s         redirect: bool = True,
281s         assert_same_host: bool = True,
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         pool_timeout: int | None = None,
281s         release_conn: bool | None = None,
281s         chunked: bool = False,
281s         body_pos: _TYPE_BODY_POSITION | None = None,
281s         preload_content: bool = True,
281s         decode_content: bool = True,
281s         **response_kw: typing.Any,
281s     ) -> BaseHTTPResponse:
281s         """
281s         Get a connection from the pool and perform an HTTP request. This is the
281s         lowest level call for making a request, so you'll need to specify all
281s         the raw details.
281s 
281s         .. note::
281s 
281s            More commonly, it's appropriate to use a convenience method
281s            such as :meth:`request`.
281s 
281s         .. note::
281s 
281s            `release_conn` will only behave as expected if
281s            `preload_content=False` because we want to make
281s            `preload_content=False` the default behaviour someday soon without
281s            breaking backwards compatibility.
281s 
281s         :param method:
281s             HTTP request method (such as GET, POST, PUT, etc.)
281s 
281s         :param url:
281s             The URL to perform the request on.
281s 
281s         :param body:
281s             Data to send in the request body, either :class:`str`, :class:`bytes`,
281s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
281s 
281s         :param headers:
281s             Dictionary of custom headers to send, such as User-Agent,
281s             If-None-Match, etc. If None, pool headers are used. If provided,
281s             these headers completely replace any pool-specific headers.
281s 
281s         :param retries:
281s             Configure the number of retries to allow before raising a
281s             :class:`~urllib3.exceptions.MaxRetryError` exception.
281s 
281s             Pass ``None`` to retry until you receive a response. Pass a
281s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
281s             over different types of retries.
281s             Pass an integer number to retry connection errors that many times,
281s             but no other types of errors. Pass zero to never retry.
281s 
281s             If ``False``, then retries are disabled and any exception is raised
281s             immediately. Also, instead of raising a MaxRetryError on redirects,
281s             the redirect response will be returned.
281s 
281s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
281s 
281s         :param redirect:
281s             If True, automatically handle redirects (status codes 301, 302,
281s             303, 307, 308). Each redirect counts as a retry. Disabling retries
281s             will disable redirect, too.
281s 
281s         :param assert_same_host:
281s             If ``True``, will make sure that the host of the pool requests is
281s             consistent else will raise HostChangedError. When ``False``, you can
281s             use the pool on an HTTP proxy and request foreign hosts.
281s 
281s         :param timeout:
281s             If specified, overrides the default timeout for this one
281s             request. It may be a float (in seconds) or an instance of
281s             :class:`urllib3.util.Timeout`.
281s 
281s         :param pool_timeout:
281s             If set and the pool is set to block=True, then this method will
281s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
281s             connection is available within the time period.
281s 
281s         :param bool preload_content:
281s             If True, the response's body will be preloaded into memory.
281s 
281s         :param bool decode_content:
281s             If True, will attempt to decode the body based on the
281s             'content-encoding' header.
281s 
281s         :param release_conn:
281s             If False, then the urlopen call will not release the connection
281s             back into the pool once a response is received (but will release if
281s             you read the entire contents of the response such as when
281s             `preload_content=True`). This is useful if you're not preloading
281s             the response's content immediately. You will need to call
281s             ``r.release_conn()`` on the response ``r`` to return the connection
281s             back into the pool. If None, it takes the value of ``preload_content``
281s             which defaults to ``True``.
281s 
281s         :param bool chunked:
281s             If True, urllib3 will send the body using chunked transfer
281s             encoding. Otherwise, urllib3 will send the body using the standard
281s             content-length form. Defaults to False.
281s 
281s         :param int body_pos:
281s             Position to seek to in file-like body in the event of a retry or
281s             redirect. Typically this won't need to be set because urllib3 will
281s             auto-populate the value when needed.
281s         """
281s         parsed_url = parse_url(url)
281s         destination_scheme = parsed_url.scheme
281s 
281s         if headers is None:
281s             headers = self.headers
281s 
281s         if not isinstance(retries, Retry):
281s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
281s 
281s         if release_conn is None:
281s             release_conn = preload_content
281s 
281s         # Check host
281s         if assert_same_host and not self.is_same_host(url):
281s             raise HostChangedError(self, url, retries)
281s 
281s         # Ensure that the URL we're connecting to is properly encoded
281s         if url.startswith("/"):
281s             url = to_str(_encode_target(url))
281s         else:
281s             url = to_str(parsed_url.url)
281s 
281s         conn = None
281s 
281s         # Track whether `conn` needs to be released before
281s         # returning/raising/recursing. Update this variable if necessary, and
281s         # leave `release_conn` constant throughout the function. That way, if
281s         # the function recurses, the original value of `release_conn` will be
281s         # passed down into the recursive call, and its value will be respected.
281s         #
281s         # See issue #651 [1] for details.
281s         #
281s         # [1]
281s         release_this_conn = release_conn
281s 
281s         http_tunnel_required = connection_requires_http_tunnel(
281s             self.proxy, self.proxy_config, destination_scheme
281s         )
281s 
281s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
281s         # have to copy the headers dict so we can safely change it without those
281s         # changes being reflected in anyone else's copy.
281s         if not http_tunnel_required:
281s             headers = headers.copy()  # type: ignore[attr-defined]
281s             headers.update(self.proxy_headers)  # type: ignore[union-attr]
281s 
281s         # Must keep the exception bound to a separate variable or else Python 3
281s         # complains about UnboundLocalError.
281s         err = None
281s 
281s         # Keep track of whether we cleanly exited the except block. This
281s         # ensures we do proper cleanup in finally.
281s         clean_exit = False
281s 
281s         # Rewind body position, if needed. Record current position
281s         # for future rewinds in the event of a redirect/retry.
281s         body_pos = set_file_position(body, body_pos)
281s 
281s         try:
281s             # Request a connection from the queue.
281s             timeout_obj = self._get_timeout(timeout)
281s             conn = self._get_conn(timeout=pool_timeout)
281s 
281s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
281s 
281s             # Is this a closed/new connection that requires CONNECT tunnelling?
281s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
281s                 try:
281s                     self._prepare_proxy(conn)
281s                 except (BaseSSLError, OSError, SocketTimeout) as e:
281s                     self._raise_timeout(
281s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
281s                     )
281s                     raise
281s 
281s             # If we're going to release the connection in ``finally:``, then
281s             # the response doesn't need to know about the connection. Otherwise
281s             # it will also try to release it and we'll have a double-release
281s             # mess.
281s             response_conn = conn if not release_conn else None
281s 
281s             # Make the request on the HTTPConnection object
281s >           response = self._make_request(
281s                 conn,
281s                 method,
281s                 url,
281s                 timeout=timeout_obj,
281s                 body=body,
281s                 headers=headers,
281s                 chunked=chunked,
281s                 retries=retries,
281s                 response_conn=response_conn,
281s                 preload_content=preload_content,
281s                 decode_content=decode_content,
281s                 **response_kw,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
281s     conn.request(
281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
281s     self.endheaders()
281s /usr/lib/python3.12/http/client.py:1331: in endheaders
281s     self._send_output(message_body, encode_chunked=encode_chunked)
281s /usr/lib/python3.12/http/client.py:1091: in _send_output
281s     self.send(msg)
281s /usr/lib/python3.12/http/client.py:1035: in send
281s     self.connect()
281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
281s     self.sock = self._new_conn()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s             sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s         except socket.gaierror as e:
281s             raise NameResolutionError(self.host, self, e) from e
281s         except SocketTimeout as e:
281s             raise ConnectTimeoutError(
281s                 self,
281s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
281s             ) from e
281s 
281s         except OSError as e:
281s >           raise NewConnectionError(
281s                 self, f"Failed to establish a new connection: {e}"
281s             ) from e
281s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s >           resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
281s     retries = retries.increment(
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s method = 'GET', url = '/a%40b/api/contents', response = None
281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
281s _pool = 
281s _stacktrace = 
281s 
281s     def increment(
281s         self,
281s         method: str | None = None,
281s         url: str | None = None,
281s         response: BaseHTTPResponse | None = None,
281s         error: Exception | None = None,
281s         _pool: ConnectionPool | None = None,
281s         _stacktrace: TracebackType | None = None,
281s     ) -> Retry:
281s         """Return a new Retry object with incremented retry counters.
281s 
281s         :param response: A response object, or None, if the server did not
281s             return a response.
281s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
281s         :param Exception error: An error encountered during the request, or
281s             None if the response was received successfully.
281s 
281s         :return: A new ``Retry`` object.
281s         """
281s         if self.total is False and error:
281s             # Disabled, indicate to re-raise the error.
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E                   RuntimeError: The notebook server failed to start
281s 
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s _____ ERROR at setup of GenericFileCheckpointsAPITest.test_mkdir_untitled ______
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s >           sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s     raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s 
281s     def create_connection(
281s         address: tuple[str, int],
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         source_address: tuple[str, int] | None = None,
281s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s     ) -> socket.socket:
281s         """Connect to *address* and return the socket object.
281s 
281s         Convenience function.  Connect to *address* (a 2-tuple ``(host,
281s         port)``) and return the socket object.
281s         Passing the optional
281s         *timeout* parameter will set the timeout on the socket instance
281s         before attempting to connect.  If no *timeout* is supplied, the
281s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
281s         is used.  If *source_address* is set it must be a tuple of (host, port)
281s         for the socket to bind as a source address before making the connection.
281s         An host of '' or port 0 tells the OS to use the default.
281s         """
281s 
281s         host, port = address
281s         if host.startswith("["):
281s             host = host.strip("[]")
281s         err = None
281s 
281s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
281s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
281s         # The original create_connection function always returns all records.
281s         family = allowed_gai_family()
281s 
281s         try:
281s             host.encode("idna")
281s         except UnicodeError:
281s             raise LocationParseError(f"'{host}', label empty or too long") from None
281s 
281s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
281s             af, socktype, proto, canonname, sa = res
281s             sock = None
281s             try:
281s                 sock = socket.socket(af, socktype, proto)
281s 
281s                 # If provided, set socket level options before connecting.
281s                 _set_socket_options(sock, socket_options)
281s 
281s                 if timeout is not _DEFAULT_TIMEOUT:
281s                     sock.settimeout(timeout)
281s                 if source_address:
281s                     sock.bind(source_address)
281s >               sock.connect(sa)
281s E               ConnectionRefusedError: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s method = 'GET', url = '/a%40b/api/contents', body = None
281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s redirect = False, assert_same_host = False
281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
281s release_conn = False, chunked = False, body_pos = None, preload_content = False
281s decode_content = False, response_kw = {}
281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
281s destination_scheme = None, conn = None, release_this_conn = True
281s http_tunnel_required = False, err = None, clean_exit = False
281s 
281s     def urlopen(  # type: ignore[override]
281s         self,
281s         method: str,
281s         url: str,
281s         body: _TYPE_BODY | None = None,
281s         headers: typing.Mapping[str, str] | None = None,
281s         retries: Retry | bool | int | None = None,
281s         redirect: bool = True,
281s         assert_same_host: bool = True,
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         pool_timeout: int | None = None,
281s         release_conn: bool | None = None,
281s         chunked: bool = False,
281s         body_pos: _TYPE_BODY_POSITION | None = None,
281s         preload_content: bool = True,
281s         decode_content: bool = True,
281s         **response_kw: typing.Any,
281s     ) -> BaseHTTPResponse:
281s         """
281s         Get a connection from the pool and perform an HTTP request.
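Every traceback in this report bottoms out in the `sock.connect(sa)` frame above failing with `[Errno 111]`: nothing is listening on localhost:12341 because the server thread died during startup. That refusal can be reproduced with nothing but the standard library (`probe_port` is an illustrative helper, not part of urllib3 or the test harness):

```python
import socket

def probe_port(host, port, timeout=1.0):
    """Return True if a TCP listener accepts a connection on (host, port)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # ConnectionRefusedError ([Errno 111] on Linux) lands here when
        # no process is bound to the port - the situation in this log.
        return False
```

A probe like this is essentially what `wait_until_alive` does at the HTTP level; here the refusal never stops because the server is not coming up at all.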
281s         This is the
281s         lowest level call for making a request, so you'll need to specify all
281s         the raw details.
281s 
281s         .. note::
281s 
281s            More commonly, it's appropriate to use a convenience method
281s            such as :meth:`request`.
281s 
281s         .. note::
281s 
281s            `release_conn` will only behave as expected if
281s            `preload_content=False` because we want to make
281s            `preload_content=False` the default behaviour someday soon without
281s            breaking backwards compatibility.
281s 
281s         :param method:
281s             HTTP request method (such as GET, POST, PUT, etc.)
281s 
281s         :param url:
281s             The URL to perform the request on.
281s 
281s         :param body:
281s             Data to send in the request body, either :class:`str`, :class:`bytes`,
281s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
281s 
281s         :param headers:
281s             Dictionary of custom headers to send, such as User-Agent,
281s             If-None-Match, etc. If None, pool headers are used. If provided,
281s             these headers completely replace any pool-specific headers.
281s 
281s         :param retries:
281s             Configure the number of retries to allow before raising a
281s             :class:`~urllib3.exceptions.MaxRetryError` exception.
281s 
281s             Pass ``None`` to retry until you receive a response. Pass a
281s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
281s             over different types of retries.
281s             Pass an integer number to retry connection errors that many times,
281s             but no other types of errors. Pass zero to never retry.
281s 
281s             If ``False``, then retries are disabled and any exception is raised
281s             immediately. Also, instead of raising a MaxRetryError on redirects,
281s             the redirect response will be returned.
281s 
281s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
281s 
281s         :param redirect:
281s             If True, automatically handle redirects (status codes 301, 302,
281s             303, 307, 308). Each redirect counts as a retry. Disabling retries
281s             will disable redirect, too.
281s 
281s         :param assert_same_host:
281s             If ``True``, will make sure that the host of the pool requests is
281s             consistent else will raise HostChangedError. When ``False``, you can
281s             use the pool on an HTTP proxy and request foreign hosts.
281s 
281s         :param timeout:
281s             If specified, overrides the default timeout for this one
281s             request. It may be a float (in seconds) or an instance of
281s             :class:`urllib3.util.Timeout`.
281s 
281s         :param pool_timeout:
281s             If set and the pool is set to block=True, then this method will
281s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
281s             connection is available within the time period.
281s 
281s         :param bool preload_content:
281s             If True, the response's body will be preloaded into memory.
281s 
281s         :param bool decode_content:
281s             If True, will attempt to decode the body based on the
281s             'content-encoding' header.
281s 
281s         :param release_conn:
281s             If False, then the urlopen call will not release the connection
281s             back into the pool once a response is received (but will release if
281s             you read the entire contents of the response such as when
281s             `preload_content=True`). This is useful if you're not preloading
281s             the response's content immediately. You will need to call
281s             ``r.release_conn()`` on the response ``r`` to return the connection
281s             back into the pool. If None, it takes the value of ``preload_content``
281s             which defaults to ``True``.
281s 
281s         :param bool chunked:
281s             If True, urllib3 will send the body using chunked transfer
281s             encoding. Otherwise, urllib3 will send the body using the standard
281s             content-length form. Defaults to False.
281s 
281s         :param int body_pos:
281s             Position to seek to in file-like body in the event of a retry or
281s             redirect. Typically this won't need to be set because urllib3 will
281s             auto-populate the value when needed.
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
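The `urlopen` body shown earlier normalises whatever it receives for `retries` through `Retry.from_int`, which is how the adapter's `max_retries` ends up as the `Retry(total=0, connect=None, read=False, ...)` visible in the locals above. A quick check of that behaviour, assuming urllib3 v2 is importable:

```python
from urllib3.util.retry import Retry

# An integer is wrapped into a Retry with that many total retries...
wrapped = Retry.from_int(3)
assert isinstance(wrapped, Retry) and wrapped.total == 3

# ...while an existing Retry instance is passed through unchanged, which
# is why the session's default shows up verbatim in the failing frames.
custom = Retry(total=0, read=False)
assert Retry.from_int(custom) is custom
```

With `total=0` there is no retry budget at all, so the very first refused connection is terminal.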
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s >           resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
281s     retries = retries.increment(
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s method = 'GET', url = '/a%40b/api/contents', response = None
281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
281s _pool = 
281s _stacktrace = 
281s 
281s     def increment(
281s         self,
281s         method: str | None = None,
281s         url: str | None = None,
281s         response: BaseHTTPResponse | None = None,
281s         error: Exception | None = None,
281s         _pool: ConnectionPool | None = None,
281s         _stacktrace: TracebackType | None = None,
281s     ) -> Retry:
281s         """Return a new Retry object with incremented retry counters.
281s 
281s         :param response: A response object, or None, if the server did not
281s             return a response.
281s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
281s         :param Exception error: An error encountered during the request, or
281s             None if the response was received successfully.
281s 
281s         :return: A new ``Retry`` object.
281s         """
281s         if self.total is False and error:
281s             # Disabled, indicate to re-raise the error.
281s             raise reraise(type(error), error, _stacktrace)
281s 
281s         total = self.total
281s         if total is not None:
281s             total -= 1
281s 
281s         connect = self.connect
281s         read = self.read
281s         redirect = self.redirect
281s         status_count = self.status
281s         other = self.other
281s         cause = "unknown"
281s         status = None
281s         redirect_location = None
281s 
281s         if error and self._is_connection_error(error):
281s             # Connect retry?
281s             if connect is False:
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif connect is not None:
281s                 connect -= 1
281s 
281s         elif error and self._is_read_error(error):
281s             # Read retry?
281s             if read is False or method is None or not self._is_method_retryable(method):
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif read is not None:
281s                 read -= 1
281s 
281s         elif error:
281s             # Other retry?
281s             if other is not None:
281s                 other -= 1
281s 
281s         elif response and response.get_redirect_location():
281s             # Redirect retry?
281s             if redirect is not None:
281s                 redirect -= 1
281s             cause = "too many redirects"
281s             response_redirect_location = response.get_redirect_location()
281s             if response_redirect_location:
281s                 redirect_location = response_redirect_location
281s             status = response.status
281s 
281s         else:
281s             # Incrementing because of a server error like a 500 in
281s             # status_forcelist and the given method is in the allowed_methods
281s             cause = ResponseError.GENERIC_ERROR
281s             if response and response.status:
281s                 if status_count is not None:
281s                     status_count -= 1
281s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
281s                 status = response.status
281s 
281s         history = self.history + (
281s             RequestHistory(method, url, error, status, redirect_location),
281s         )
281s 
281s         new_retry = self.new(
281s             total=total,
281s             connect=connect,
281s             read=read,
281s             redirect=redirect,
281s             status=status_count,
281s             other=other,
281s             history=history,
281s         )
281s 
281s         if new_retry.is_exhausted():
281s             reason = error or ResponseError(cause)
281s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
281s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s >               cls.fetch_url(url)
281s 
281s notebook/tests/launchnotebook.py:53: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s notebook/tests/launchnotebook.py:82: in fetch_url
281s     return requests.get(url)
281s /usr/lib/python3/dist-packages/requests/api.py:73: in get
281s     return request("get", url, params=params, **kwargs)
281s /usr/lib/python3/dist-packages/requests/api.py:59: in request
281s     return session.request(method=method, url=url, **kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
281s     resp = self.send(prep, **send_kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
281s     r = adapter.send(request, **kwargs)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
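With `total=0`, the first connection failure drives the counter in `Retry.increment` below zero, `is_exhausted()` becomes true, and the underlying error is wrapped in `MaxRetryError`, exactly as the `E` line above shows. A sketch of that terminal path, assuming urllib3 v2 (the `OSError` here is a stand-in for the real `NewConnectionError` in the log):

```python
from urllib3.exceptions import MaxRetryError
from urllib3.util.retry import Retry

# The same Retry configuration that appears in the locals above.
retry = Retry(total=0, connect=None, read=False, redirect=None, status=None)
original = OSError(111, "Connection refused")

raised = False
try:
    # One failed attempt takes `total` from 0 to -1 -> exhausted.
    retry.increment(method="GET", url="/a%40b/api/contents", error=original)
except MaxRetryError as exc:
    raised = True
    # The wrapper keeps the underlying error as its `reason`,
    # which requests later inspects to pick an exception class.
    assert exc.reason is original
assert raised
```

This is why the log shows no second connection attempt: the budget is spent on the first refusal.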
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s             resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s         except (ProtocolError, OSError) as err:
281s             raise ConnectionError(err, request=request)
281s 
281s         except MaxRetryError as e:
281s             if isinstance(e.reason, ConnectTimeoutError):
281s                 # TODO: Remove this in 3.0.0: see #2811
281s                 if not isinstance(e.reason, NewConnectionError):
281s                     raise ConnectTimeout(e, request=request)
281s 
281s             if isinstance(e.reason, ResponseError):
281s                 raise RetryError(e, request=request)
281s 
281s             if isinstance(e.reason, _ProxyError):
281s                 raise ProxyError(e, request=request)
281s 
281s             if isinstance(e.reason, _SSLError):
281s                 # This branch is for urllib3 v1.22 and later.
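The adapter code above turns a bare float or a `(connect, read)` tuple into a single urllib3 `Timeout` (imported in requests as `TimeoutSauce`) and rejects malformed tuples with `ValueError`. A standalone sketch of that normalisation, assuming urllib3 v2's `Timeout` API (`normalise_timeout` is an illustrative name, not requests' own):

```python
from urllib3.util.timeout import Timeout

def normalise_timeout(timeout):
    """Mirror the adapter's logic: float or (connect, read) -> Timeout."""
    if isinstance(timeout, tuple):
        try:
            connect, read = timeout
        except ValueError:
            # A 1- or 3-element tuple fails to unpack, as in the adapter.
            raise ValueError(
                f"Invalid timeout {timeout}. Pass a (connect, read) timeout "
                f"tuple, or a single float to set both timeouts to the same value."
            )
        return Timeout(connect=connect, read=read)
    if isinstance(timeout, Timeout):
        return timeout
    # A single number applies to both the connect and the read phase.
    return Timeout(connect=timeout, read=timeout)
```

In this log the timeout is `Timeout(connect=None, read=None, total=None)`, so nothing limits the attempt; the failure is an immediate refusal, not an expiry.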
281s                 raise SSLError(e, request=request)
281s 
281s >           raise ConnectionError(e, request=request)
281s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def setup_class(cls):
281s         cls.tmp_dir = TemporaryDirectory()
281s         def tmp(*parts):
281s             path = os.path.join(cls.tmp_dir.name, *parts)
281s             try:
281s                 os.makedirs(path)
281s             except OSError as e:
281s                 if e.errno != errno.EEXIST:
281s                     raise
281s             return path
281s 
281s         cls.home_dir = tmp('home')
281s         data_dir = cls.data_dir = tmp('data')
281s         config_dir = cls.config_dir = tmp('config')
281s         runtime_dir = cls.runtime_dir = tmp('runtime')
281s         cls.notebook_dir = tmp('notebooks')
281s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
281s         cls.env_patch.start()
281s         # Patch systemwide & user-wide data & config directories, to isolate
281s         # the tests from oddities of the local setup. But leave Python env
281s         # locations alone, so data files for e.g. nbconvert are accessible.
281s         # If this isolation isn't sufficient, you may need to run the tests in
281s         # a virtualenv or conda env.
281s         cls.path_patch = patch.multiple(
281s             jupyter_core.paths,
281s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
281s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
281s         )
281s         cls.path_patch.start()
281s 
281s         config = cls.config or Config()
281s         config.NotebookNotary.db_file = ':memory:'
281s 
281s         cls.token = hexlify(os.urandom(4)).decode('ascii')
281s 
281s         started = Event()
281s         def start_thread():
281s             try:
281s                 bind_args = cls.get_bind_args()
281s                 app = cls.notebook = NotebookApp(
281s                     port_retries=0,
281s                     open_browser=False,
281s                     config_dir=cls.config_dir,
281s                     data_dir=cls.data_dir,
281s                     runtime_dir=cls.runtime_dir,
281s                     notebook_dir=cls.notebook_dir,
281s                     base_url=cls.url_prefix,
281s                     config=config,
281s                     allow_root=True,
281s                     token=cls.token,
281s                     **bind_args
281s                 )
281s                 if "asyncio" in sys.modules:
281s                     app._init_asyncio_patch()
281s                     import asyncio
281s 
281s                     asyncio.set_event_loop(asyncio.new_event_loop())
281s                     # Patch the current loop in order to match production
281s                     # behavior
281s                     import nest_asyncio
281s 
281s                     nest_asyncio.apply()
281s                 # don't register signal handler during tests
281s                 app.init_signal = lambda : None
281s                 # clear log handlers and propagate to root for nose to capture it
281s                 # needs to be redone after initialize, which reconfigures logging
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 app.initialize(argv=cls.get_argv())
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 loop = IOLoop.current()
281s                 loop.add_callback(started.set)
281s                 app.start()
281s             finally:
281s                 # set the event, so failure to start doesn't cause a hang
281s                 started.set()
281s                 app.session_manager.close()
281s         cls.notebook_thread = Thread(target=start_thread)
281s         cls.notebook_thread.daemon = True
281s         cls.notebook_thread.start()
281s         started.wait()
281s >       cls.wait_until_alive()
281s 
281s notebook/tests/launchnotebook.py:198: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E                   RuntimeError: The notebook server failed to start
281s 
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s _________ ERROR at setup of GenericFileCheckpointsAPITest.test_rename __________
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s >           sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s     raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s 
281s     def create_connection(
281s         address: tuple[str, int],
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         source_address: tuple[str, int] | None = None,
281s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s     ) -> socket.socket:
281s         """Connect to *address* and return the socket object.
281s 
281s         Convenience function.  Connect to *address* (a 2-tuple ``(host,
281s         port)``) and return the socket object.
281s         Passing the optional
281s         *timeout* parameter will set the timeout on the socket instance
281s         before attempting to connect.  If no *timeout* is supplied, the
281s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
281s         is used.  If *source_address* is set it must be a tuple of (host, port)
281s         for the socket to bind as a source address before making the connection.
281s         An host of '' or port 0 tells the OS to use the default.
281s         """
281s 
281s         host, port = address
281s         if host.startswith("["):
281s             host = host.strip("[]")
281s         err = None
281s 
281s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
281s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
281s         # The original create_connection function always returns all records.
281s         family = allowed_gai_family()
281s 
281s         try:
281s             host.encode("idna")
281s         except UnicodeError:
281s             raise LocationParseError(f"'{host}', label empty or too long") from None
281s 
281s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
281s             af, socktype, proto, canonname, sa = res
281s             sock = None
281s             try:
281s                 sock = socket.socket(af, socktype, proto)
281s 
281s                 # If provided, set socket level options before connecting.
281s                 _set_socket_options(sock, socket_options)
281s 
281s                 if timeout is not _DEFAULT_TIMEOUT:
281s                     sock.settimeout(timeout)
281s                 if source_address:
281s                     sock.bind(source_address)
281s >               sock.connect(sa)
281s E               ConnectionRefusedError: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s method = 'GET', url = '/a%40b/api/contents', body = None
281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s redirect = False, assert_same_host = False
281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
281s release_conn = False, chunked = False, body_pos = None, preload_content = False
281s decode_content = False, response_kw = {}
281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
281s destination_scheme = None, conn = None, release_this_conn = True
281s http_tunnel_required = False, err = None, clean_exit = False
281s 
281s     def urlopen(  # type: ignore[override]
281s         self,
281s         method: str,
281s         url: str,
281s         body: _TYPE_BODY | None = None,
281s         headers: typing.Mapping[str, str] | None = None,
281s         retries: Retry | bool | int | None = None,
281s         redirect: bool = True,
281s         assert_same_host: bool = True,
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         pool_timeout: int | None = None,
281s         release_conn: bool | None = None,
281s         chunked: bool = False,
281s         body_pos: _TYPE_BODY_POSITION | None = None,
281s         preload_content: bool = True,
281s         decode_content: bool = True,
281s         **response_kw: typing.Any,
281s     ) -> BaseHTTPResponse:
281s         """
281s         Get a connection from the pool and perform an HTTP request.
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s
281s :param assert_same_host:
281s If ``True``, will make sure that the host of the pool requests is
281s consistent else will raise HostChangedError. When ``False``, you can
281s use the pool on an HTTP proxy and request foreign hosts.
281s
281s :param timeout:
281s If specified, overrides the default timeout for this one
281s request. It may be a float (in seconds) or an instance of
281s :class:`urllib3.util.Timeout`.
281s
281s :param pool_timeout:
281s If set and the pool is set to block=True, then this method will
281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no
281s connection is available within the time period.
281s
281s :param bool preload_content:
281s If True, the response's body will be preloaded into memory.
281s
281s :param bool decode_content:
281s If True, will attempt to decode the body based on the
281s 'content-encoding' header.
281s
281s :param release_conn:
281s If False, then the urlopen call will not release the connection
281s back into the pool once a response is received (but will release if
281s you read the entire contents of the response such as when
281s `preload_content=True`). This is useful if you're not preloading
281s the response's content immediately. You will need to call
281s ``r.release_conn()`` on the response ``r`` to return the connection
281s back into the pool. If None, it takes the value of ``preload_content``
281s which defaults to ``True``.
281s
281s :param bool chunked:
281s If True, urllib3 will send the body using chunked transfer
281s encoding. Otherwise, urllib3 will send the body using the standard
281s content-length form. Defaults to False.
281s
281s :param int body_pos:
281s Position to seek to in file-like body in the event of a retry or
281s redirect. Typically this won't need to be set because urllib3 will
281s auto-populate the value when needed.
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object
281s :param verify: (optional) Either a boolean, in which case it controls whether
281s we verify the server's TLS certificate, or a string, in which case it
281s must be a path to a CA bundle to use
281s :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s :param proxies: (optional) The proxies dictionary to apply to the request.
281s :rtype: requests.Response
281s """
281s
281s try:
281s conn = self.get_connection(request.url, proxies)
281s except LocationValueError as e:
281s raise InvalidURL(e, request=request)
281s
281s self.cert_verify(conn, request.url, verify, cert)
281s url = self.request_url(request, proxies)
281s self.add_headers(
281s request,
281s stream=stream,
281s timeout=timeout,
281s verify=verify,
281s cert=cert,
281s proxies=proxies,
281s )
281s
281s chunked = not (request.body is None or "Content-Length" in request.headers)
281s
281s if isinstance(timeout, tuple):
281s try:
281s connect, read = timeout
281s timeout = TimeoutSauce(connect=connect, read=read)
281s except ValueError:
281s raise ValueError(
281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s f"or a single float to set both timeouts to the same value."
281s )
281s elif isinstance(timeout, TimeoutSauce):
281s pass
281s else:
281s timeout = TimeoutSauce(connect=timeout, read=timeout)
281s
281s try:
281s > resp = conn.urlopen(
281s method=request.method,
281s url=url,
281s body=request.body,
281s headers=request.headers,
281s redirect=False,
281s assert_same_host=False,
281s preload_content=False,
281s decode_content=False,
281s retries=self.max_retries,
281s timeout=timeout,
281s chunked=chunked,
281s )
281s
281s /usr/lib/python3/dist-packages/requests/adapters.py:486:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
281s retries = retries.increment(
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s method = 'GET', url = '/a%40b/api/contents', response = None
281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
281s _pool =
281s _stacktrace =
281s
281s def increment(
281s self,
281s method: str | None = None,
281s url: str | None = None,
281s response: BaseHTTPResponse | None = None,
281s error: Exception | None = None,
281s _pool: ConnectionPool | None = None,
281s _stacktrace: TracebackType | None = None,
281s ) -> Retry:
281s """Return a new Retry object with incremented retry counters.
281s
281s :param response: A response object, or None, if the server did not
281s return a response.
281s :type response: :class:`~urllib3.response.BaseHTTPResponse`
281s :param Exception error: An error encountered during the request, or
281s None if the response was received successfully.
281s
281s :return: A new ``Retry`` object.
281s """
281s if self.total is False and error:
281s # Disabled, indicate to re-raise the error.
281s raise reraise(type(error), error, _stacktrace)
281s
281s total = self.total
281s if total is not None:
281s total -= 1
281s
281s connect = self.connect
281s read = self.read
281s redirect = self.redirect
281s status_count = self.status
281s other = self.other
281s cause = "unknown"
281s status = None
281s redirect_location = None
281s
281s if error and self._is_connection_error(error):
281s # Connect retry?
281s if connect is False:
281s raise reraise(type(error), error, _stacktrace)
281s elif connect is not None:
281s connect -= 1
281s
281s elif error and self._is_read_error(error):
281s # Read retry?
281s if read is False or method is None or not self._is_method_retryable(method):
281s raise reraise(type(error), error, _stacktrace)
281s elif read is not None:
281s read -= 1
281s
281s elif error:
281s # Other retry?
281s if other is not None:
281s other -= 1
281s
281s elif response and response.get_redirect_location():
281s # Redirect retry?
281s if redirect is not None:
281s redirect -= 1
281s cause = "too many redirects"
281s response_redirect_location = response.get_redirect_location()
281s if response_redirect_location:
281s redirect_location = response_redirect_location
281s status = response.status
281s
281s else:
281s # Incrementing because of a server error like a 500 in
281s # status_forcelist and the given method is in the allowed_methods
281s cause = ResponseError.GENERIC_ERROR
281s if response and response.status:
281s if status_count is not None:
281s status_count -= 1
281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
281s status = response.status
281s
281s history = self.history + (
281s RequestHistory(method, url, error, status, redirect_location),
281s )
281s
281s new_retry = self.new(
281s total=total,
281s connect=connect,
281s read=read,
281s redirect=redirect,
281s status=status_count,
281s other=other,
281s history=history,
281s )
281s
281s if new_retry.is_exhausted():
281s reason = error or ResponseError(cause)
281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s
281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
281s
281s During handling of the above exception, another exception occurred:
281s
281s cls =
281s
281s @classmethod
281s def wait_until_alive(cls):
281s """Wait for the server to be alive"""
281s url = cls.base_url() + 'api/contents'
281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s try:
281s > cls.fetch_url(url)
281s
281s notebook/tests/launchnotebook.py:53:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s notebook/tests/launchnotebook.py:82: in fetch_url
281s return requests.get(url)
281s /usr/lib/python3/dist-packages/requests/api.py:73: in get
281s return request("get", url, params=params, **kwargs)
281s /usr/lib/python3/dist-packages/requests/api.py:59: in request
281s return session.request(method=method, url=url, **kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
281s resp = self.send(prep, **send_kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
281s r = adapter.send(request, **kwargs)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s self =
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s
281s def send(
281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s ):
281s """Sends PreparedRequest object. Returns Response object.
281s
281s :param request: The :class:`PreparedRequest ` being sent.
281s :param stream: (optional) Whether to stream the request content.
281s :param timeout: (optional) How long to wait for the server to send
281s data before giving up, as a float, or a :ref:`(connect timeout,
281s read timeout) ` tuple.
281s :type timeout: float or tuple or urllib3 Timeout object
281s :param verify: (optional) Either a boolean, in which case it controls whether
281s we verify the server's TLS certificate, or a string, in which case it
281s must be a path to a CA bundle to use
281s :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s :param proxies: (optional) The proxies dictionary to apply to the request.
281s :rtype: requests.Response
281s """
281s
281s try:
281s conn = self.get_connection(request.url, proxies)
281s except LocationValueError as e:
281s raise InvalidURL(e, request=request)
281s
281s self.cert_verify(conn, request.url, verify, cert)
281s url = self.request_url(request, proxies)
281s self.add_headers(
281s request,
281s stream=stream,
281s timeout=timeout,
281s verify=verify,
281s cert=cert,
281s proxies=proxies,
281s )
281s
281s chunked = not (request.body is None or "Content-Length" in request.headers)
281s
281s if isinstance(timeout, tuple):
281s try:
281s connect, read = timeout
281s timeout = TimeoutSauce(connect=connect, read=read)
281s except ValueError:
281s raise ValueError(
281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s f"or a single float to set both timeouts to the same value."
281s )
281s elif isinstance(timeout, TimeoutSauce):
281s pass
281s else:
281s timeout = TimeoutSauce(connect=timeout, read=timeout)
281s
281s try:
281s resp = conn.urlopen(
281s method=request.method,
281s url=url,
281s body=request.body,
281s headers=request.headers,
281s redirect=False,
281s assert_same_host=False,
281s preload_content=False,
281s decode_content=False,
281s retries=self.max_retries,
281s timeout=timeout,
281s chunked=chunked,
281s )
281s
281s except (ProtocolError, OSError) as err:
281s raise ConnectionError(err, request=request)
281s
281s except MaxRetryError as e:
281s if isinstance(e.reason, ConnectTimeoutError):
281s # TODO: Remove this in 3.0.0: see #2811
281s if not isinstance(e.reason, NewConnectionError):
281s raise ConnectTimeout(e, request=request)
281s
281s if isinstance(e.reason, ResponseError):
281s raise RetryError(e, request=request)
281s
281s if isinstance(e.reason, _ProxyError):
281s raise ProxyError(e, request=request)
281s
281s if isinstance(e.reason, _SSLError):
281s # This branch is for urllib3 v1.22 and later.
281s raise SSLError(e, request=request)
281s
281s > raise ConnectionError(e, request=request)
281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s
281s The above exception was the direct cause of the following exception:
281s
281s cls =
281s
281s @classmethod
281s def setup_class(cls):
281s cls.tmp_dir = TemporaryDirectory()
281s def tmp(*parts):
281s path = os.path.join(cls.tmp_dir.name, *parts)
281s try:
281s os.makedirs(path)
281s except OSError as e:
281s if e.errno != errno.EEXIST:
281s raise
281s return path
281s
281s cls.home_dir = tmp('home')
281s data_dir = cls.data_dir = tmp('data')
281s config_dir = cls.config_dir = tmp('config')
281s runtime_dir = cls.runtime_dir = tmp('runtime')
281s cls.notebook_dir = tmp('notebooks')
281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
281s cls.env_patch.start()
281s # Patch systemwide & user-wide data & config directories, to isolate
281s # the tests from oddities of the local setup. But leave Python env
281s # locations alone, so data files for e.g. nbconvert are accessible.
281s # If this isolation isn't sufficient, you may need to run the tests in
281s # a virtualenv or conda env.
281s cls.path_patch = patch.multiple(
281s jupyter_core.paths,
281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
281s )
281s cls.path_patch.start()
281s
281s config = cls.config or Config()
281s config.NotebookNotary.db_file = ':memory:'
281s
281s cls.token = hexlify(os.urandom(4)).decode('ascii')
281s
281s started = Event()
281s def start_thread():
281s try:
281s bind_args = cls.get_bind_args()
281s app = cls.notebook = NotebookApp(
281s port_retries=0,
281s open_browser=False,
281s config_dir=cls.config_dir,
281s data_dir=cls.data_dir,
281s runtime_dir=cls.runtime_dir,
281s notebook_dir=cls.notebook_dir,
281s base_url=cls.url_prefix,
281s config=config,
281s allow_root=True,
281s token=cls.token,
281s **bind_args
281s )
281s if "asyncio" in sys.modules:
281s app._init_asyncio_patch()
281s import asyncio
281s
281s asyncio.set_event_loop(asyncio.new_event_loop())
281s # Patch the current loop in order to match production
281s # behavior
281s import nest_asyncio
281s
281s nest_asyncio.apply()
281s # don't register signal handler during tests
281s app.init_signal = lambda : None
281s # clear log handlers and propagate to root for nose to capture it
281s # needs to be redone after initialize, which reconfigures logging
281s app.log.propagate = True
281s app.log.handlers = []
281s app.initialize(argv=cls.get_argv())
281s app.log.propagate = True
281s app.log.handlers = []
281s loop = IOLoop.current()
281s loop.add_callback(started.set)
281s app.start()
281s finally:
281s # set the event, so failure to start doesn't cause a hang
281s started.set()
281s app.session_manager.close()
281s cls.notebook_thread = Thread(target=start_thread)
281s cls.notebook_thread.daemon = True
281s cls.notebook_thread.start()
281s started.wait()
281s > cls.wait_until_alive()
281s
281s notebook/tests/launchnotebook.py:198:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s cls =
281s
281s @classmethod
281s def wait_until_alive(cls):
281s """Wait for the server to be alive"""
281s url = cls.base_url() + 'api/contents'
281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s try:
281s cls.fetch_url(url)
281s except ModuleNotFoundError as error:
281s # Errors that should be immediately thrown back to caller
281s raise error
281s except Exception as e:
281s if not cls.notebook_thread.is_alive():
281s > raise RuntimeError("The notebook server failed to start") from e
281s E RuntimeError: The notebook server failed to start
281s
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s ____ ERROR at setup of GenericFileCheckpointsAPITest.test_rename_400_hidden ____
281s
281s self =
281s
281s def _new_conn(self) -> socket.socket:
281s """Establish a socket connection and set nodelay settings on it.
281s
281s :return: New socket connection.
281s """
281s try:
281s > sock = connection.create_connection(
281s (self._dns_host, self.port),
281s self.timeout,
281s source_address=self.source_address,
281s socket_options=self.socket_options,
281s )
281s
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s
281s def create_connection(
281s address: tuple[str, int],
281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s source_address: tuple[str, int] | None = None,
281s socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s ) -> socket.socket:
281s """Connect to *address* and return the socket object.
281s
281s Convenience function. Connect to *address* (a 2-tuple ``(host,
281s port)``) and return the socket object. Passing the optional
281s *timeout* parameter will set the timeout on the socket instance
281s before attempting to connect. If no *timeout* is supplied, the
281s global default timeout setting returned by :func:`socket.getdefaulttimeout`
281s is used. If *source_address* is set it must be a tuple of (host, port)
281s for the socket to bind as a source address before making the connection.
281s An host of '' or port 0 tells the OS to use the default.
281s """
281s
281s host, port = address
281s if host.startswith("["):
281s host = host.strip("[]")
281s err = None
281s
281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets
281s # us select whether to work with IPv4 DNS records, IPv6 records, or both.
281s # The original create_connection function always returns all records.
281s family = allowed_gai_family()
281s
281s try:
281s host.encode("idna")
281s except UnicodeError:
281s raise LocationParseError(f"'{host}', label empty or too long") from None
281s
281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
281s af, socktype, proto, canonname, sa = res
281s sock = None
281s try:
281s sock = socket.socket(af, socktype, proto)
281s
281s # If provided, set socket level options before connecting.
281s _set_socket_options(sock, socket_options)
281s
281s if timeout is not _DEFAULT_TIMEOUT:
281s sock.settimeout(timeout)
281s if source_address:
281s sock.bind(source_address)
281s > sock.connect(sa)
281s E ConnectionRefusedError: [Errno 111] Connection refused
281s
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
281s
281s The above exception was the direct cause of the following exception:
281s
281s self =
281s method = 'GET', url = '/a%40b/api/contents', body = None
281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s redirect = False, assert_same_host = False
281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
281s release_conn = False, chunked = False, body_pos = None, preload_content = False
281s decode_content = False, response_kw = {}
281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
281s destination_scheme = None, conn = None, release_this_conn = True
281s http_tunnel_required = False, err = None, clean_exit = False
281s
281s def urlopen( # type: ignore[override]
281s self,
281s method: str,
281s url: str,
281s body: _TYPE_BODY | None = None,
281s headers: typing.Mapping[str, str] | None = None,
281s retries: Retry | bool | int | None = None,
281s redirect: bool = True,
281s assert_same_host: bool = True,
281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s pool_timeout: int | None = None,
281s release_conn: bool | None = None,
281s chunked: bool = False,
281s body_pos: _TYPE_BODY_POSITION | None = None,
281s preload_content: bool = True,
281s decode_content: bool = True,
281s **response_kw: typing.Any,
281s ) -> BaseHTTPResponse:
281s """
281s Get a connection from the pool and perform an HTTP request. This is the
281s lowest level call for making a request, so you'll need to specify all
281s the raw details.
281s
281s .. note::
281s
281s More commonly, it's appropriate to use a convenience method
281s such as :meth:`request`.
281s
281s .. note::
281s
281s `release_conn` will only behave as expected if
281s `preload_content=False` because we want to make
281s `preload_content=False` the default behaviour someday soon without
281s breaking backwards compatibility.
281s
281s :param method:
281s HTTP request method (such as GET, POST, PUT, etc.)
281s
281s :param url:
281s The URL to perform the request on.
281s
281s :param body:
281s Data to send in the request body, either :class:`str`, :class:`bytes`,
281s an iterable of :class:`str`/:class:`bytes`, or a file-like object.
281s
281s :param headers:
281s Dictionary of custom headers to send, such as User-Agent,
281s If-None-Match, etc. If None, pool headers are used. If provided,
281s these headers completely replace any pool-specific headers.
281s
281s :param retries:
281s Configure the number of retries to allow before raising a
281s :class:`~urllib3.exceptions.MaxRetryError` exception.
281s
281s Pass ``None`` to retry until you receive a response. Pass a
281s :class:`~urllib3.util.retry.Retry` object for fine-grained control
281s over different types of retries.
281s Pass an integer number to retry connection errors that many times,
281s but no other types of errors. Pass zero to never retry.
281s
281s If ``False``, then retries are disabled and any exception is raised
281s immediately. Also, instead of raising a MaxRetryError on redirects,
281s the redirect response will be returned.
281s
281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
281s
281s :param redirect:
281s If True, automatically handle redirects (status codes 301, 302,
281s 303, 307, 308). Each redirect counts as a retry. Disabling retries
281s will disable redirect, too.
281s
281s :param assert_same_host:
281s If ``True``, will make sure that the host of the pool requests is
281s consistent else will raise HostChangedError. When ``False``, you can
281s use the pool on an HTTP proxy and request foreign hosts.
281s
281s :param timeout:
281s If specified, overrides the default timeout for this one
281s request. It may be a float (in seconds) or an instance of
281s :class:`urllib3.util.Timeout`.
281s
281s :param pool_timeout:
281s If set and the pool is set to block=True, then this method will
281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no
281s connection is available within the time period.
281s
281s :param bool preload_content:
281s If True, the response's body will be preloaded into memory.
281s
281s :param bool decode_content:
281s If True, will attempt to decode the body based on the
281s 'content-encoding' header.
281s
281s :param release_conn:
281s If False, then the urlopen call will not release the connection
281s back into the pool once a response is received (but will release if
281s you read the entire contents of the response such as when
281s `preload_content=True`). This is useful if you're not preloading
281s the response's content immediately. You will need to call
281s ``r.release_conn()`` on the response ``r`` to return the connection
281s back into the pool. If None, it takes the value of ``preload_content``
281s which defaults to ``True``.
281s
281s :param bool chunked:
281s If True, urllib3 will send the body using chunked transfer
281s encoding. Otherwise, urllib3 will send the body using the standard
281s content-length form. Defaults to False.
281s
281s :param int body_pos:
281s Position to seek to in file-like body in the event of a retry or
281s redirect. Typically this won't need to be set because urllib3 will
281s auto-populate the value when needed.
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s _____ ERROR at setup of GenericFileCheckpointsAPITest.test_rename_existing _____ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
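The `wait_until_alive` loop that just raised `RuntimeError` polls the contents API until the server answers, and bails out early when the server thread has already died rather than polling out the clock. A minimal standalone sketch of that pattern (`fetch` and `thread_alive` are hypothetical stand-ins for `cls.fetch_url` and `cls.notebook_thread.is_alive` in the harness above; the constants are assumptions, not the harness's actual values):

```python
import time

MAX_WAITTIME = 30
POLL_INTERVAL = 0.1

def wait_until_alive(fetch, thread_alive=lambda: True):
    """Poll `fetch` until it succeeds, mirroring the loop in launchnotebook.py."""
    last_error = None
    for _ in range(int(MAX_WAITTIME / POLL_INTERVAL)):
        try:
            return fetch()
        except Exception as e:
            last_error = e
            if not thread_alive():
                # The server thread died: fail fast with a clear cause,
                # exactly the branch taken in the log above.
                raise RuntimeError("The notebook server failed to start") from e
            time.sleep(POLL_INTERVAL)
    raise RuntimeError("Timed out waiting for the server") from last_error
```

Chaining with `from e` is what produces the "The above exception was the direct cause of" sections seen throughout this log: the final `RuntimeError` carries the whole `ConnectionError`/`MaxRetryError` chain as its `__cause__`.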
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
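Every failure in this run bottoms out at the same place: `sock.connect()` against a port with no listener, which Linux reports as `ECONNREFUSED` (errno 111), urllib3 wraps in `NewConnectionError`, and requests re-raises as `ConnectionError`. A quick sketch of that raw socket layer in isolation (the bind-and-release trick for finding an unused port is an assumption for illustration, not part of the test harness):

```python
import errno
import socket

# Grab a port number that currently has no listener: bind an ephemeral
# port, record its number, and close the socket without ever listening.
probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
probe.bind(("127.0.0.1", 0))
unused_port = probe.getsockname()[1]
probe.close()

try:
    # The same call urllib3's create_connection() makes under the hood.
    socket.create_connection(("127.0.0.1", unused_port), timeout=1)
except ConnectionRefusedError as e:
    # On Linux this is errno 111, the "[Errno 111] Connection refused"
    # threaded through every traceback in this log.
    assert e.errno == errno.ECONNREFUSED
```

The refusal is immediate (the kernel answers with RST), which is why `Retry(total=0, ...)` exhausts on the very first attempt and `increment()` raises `MaxRetryError` instead of waiting on any timeout.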
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
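The retry object in the frame above is `Retry(total=0, ...)`, so a single failed connect exhausts the budget: `total` drops to -1, the new `Retry` is exhausted, and `increment()` raises `MaxRetryError` wrapping the cause. A minimal sketch of that path, assuming urllib3 is importable; the URL is taken from the log, and the `OSError` stands in for the real `NewConnectionError`:

```python
from urllib3.exceptions import MaxRetryError
from urllib3.util.retry import Retry

# total=0 becomes -1 after one increment; the resulting Retry object
# is exhausted, so increment() raises MaxRetryError from the cause.
retry = Retry(total=0)
try:
    retry.increment(method="GET", url="/a%40b/api/contents",
                    error=OSError(111, "Connection refused"))
except MaxRetryError as err:
    assert isinstance(err.reason, OSError)
    print("exhausted after one attempt")
```

This is why the log shows no actual retrying: with `total=0` the very first connection failure is terminal.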
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s __________ ERROR at setup of GenericFileCheckpointsAPITest.test_save ___________ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
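`setup_class` starts the `NotebookApp` in a daemon thread, waits on an `Event`, and then polls it via `wait_until_alive`; the `RuntimeError` above fires only once the poll fails *and* the server thread is dead. The start-then-poll pattern can be sketched standalone with the stdlib (hypothetical names, not the notebook harness; uses `http.server` in place of the real app):

```python
import http.server
import threading
import time
import urllib.request

started = threading.Event()
server_box = {}

def start_thread():
    # Bind to port 0 so the OS assigns a free port, then signal readiness.
    srv = http.server.HTTPServer(
        ("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler)
    server_box["srv"] = srv
    started.set()
    srv.serve_forever()

t = threading.Thread(target=start_thread, daemon=True)
t.start()
started.wait(5)

# Poll until the server answers, as wait_until_alive() does; bail out
# with RuntimeError if the server thread has died in the meantime.
url = "http://127.0.0.1:%d/" % server_box["srv"].server_address[1]
for _ in range(50):
    try:
        urllib.request.urlopen(url, timeout=2)
        break
    except OSError as e:
        if not t.is_alive():
            raise RuntimeError("the server failed to start") from e
        time.sleep(0.1)
else:
    raise RuntimeError("server never became alive")

server_box["srv"].shutdown()
print("server answered")
```

In the failing run above the thread died before ever binding the port, so every poll got `ConnectionRefusedError` and the loop's dead-thread check raised the `RuntimeError` chained from the `ConnectionError`.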
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s _________ ERROR at setup of GenericFileCheckpointsAPITest.test_upload __________ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s _______ ERROR at setup of GenericFileCheckpointsAPITest.test_upload_b64 ________ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s _______ ERROR at setup of GenericFileCheckpointsAPITest.test_upload_txt ________ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s                 _set_socket_options(sock, socket_options)
281s
281s                 if timeout is not _DEFAULT_TIMEOUT:
281s                     sock.settimeout(timeout)
281s                 if source_address:
281s                     sock.bind(source_address)
281s >               sock.connect(sa)
281s E               ConnectionRefusedError: [Errno 111] Connection refused
281s
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
281s
281s The above exception was the direct cause of the following exception:
281s
281s self =
281s method = 'GET', url = '/a%40b/api/contents', body = None
281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s redirect = False, assert_same_host = False
281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
281s release_conn = False, chunked = False, body_pos = None, preload_content = False
281s decode_content = False, response_kw = {}
281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
281s destination_scheme = None, conn = None, release_this_conn = True
281s http_tunnel_required = False, err = None, clean_exit = False
281s
281s     def urlopen( # type: ignore[override]
281s         self,
281s         method: str,
281s         url: str,
281s         body: _TYPE_BODY | None = None,
281s         headers: typing.Mapping[str, str] | None = None,
281s         retries: Retry | bool | int | None = None,
281s         redirect: bool = True,
281s         assert_same_host: bool = True,
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         pool_timeout: int | None = None,
281s         release_conn: bool | None = None,
281s         chunked: bool = False,
281s         body_pos: _TYPE_BODY_POSITION | None = None,
281s         preload_content: bool = True,
281s         decode_content: bool = True,
281s         **response_kw: typing.Any,
281s     ) -> BaseHTTPResponse:
281s         """
281s         Get a connection from the pool and perform an HTTP request. This is the
281s         lowest level call for making a request, so you'll need to specify all
281s         the raw details.
281s
281s         .. note::
281s
281s            More commonly, it's appropriate to use a convenience method
281s            such as :meth:`request`.
281s
281s         .. note::
281s
281s            `release_conn` will only behave as expected if
281s            `preload_content=False` because we want to make
281s            `preload_content=False` the default behaviour someday soon without
281s            breaking backwards compatibility.
281s
281s         :param method:
281s             HTTP request method (such as GET, POST, PUT, etc.)
281s
281s         :param url:
281s             The URL to perform the request on.
281s
281s         :param body:
281s             Data to send in the request body, either :class:`str`, :class:`bytes`,
281s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
281s
281s         :param headers:
281s             Dictionary of custom headers to send, such as User-Agent,
281s             If-None-Match, etc. If None, pool headers are used. If provided,
281s             these headers completely replace any pool-specific headers.
281s
281s         :param retries:
281s             Configure the number of retries to allow before raising a
281s             :class:`~urllib3.exceptions.MaxRetryError` exception.
281s
281s             Pass ``None`` to retry until you receive a response. Pass a
281s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
281s             over different types of retries.
281s             Pass an integer number to retry connection errors that many times,
281s             but no other types of errors. Pass zero to never retry.
281s
281s             If ``False``, then retries are disabled and any exception is raised
281s             immediately. Also, instead of raising a MaxRetryError on redirects,
281s             the redirect response will be returned.
281s
281s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
281s
281s         :param redirect:
281s             If True, automatically handle redirects (status codes 301, 302,
281s             303, 307, 308). Each redirect counts as a retry. Disabling retries
281s             will disable redirect, too.
281s
281s         :param assert_same_host:
281s             If ``True``, will make sure that the host of the pool requests is
281s             consistent else will raise HostChangedError. When ``False``, you can
281s             use the pool on an HTTP proxy and request foreign hosts.
281s
281s         :param timeout:
281s             If specified, overrides the default timeout for this one
281s             request. It may be a float (in seconds) or an instance of
281s             :class:`urllib3.util.Timeout`.
281s
281s         :param pool_timeout:
281s             If set and the pool is set to block=True, then this method will
281s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
281s             connection is available within the time period.
281s
281s         :param bool preload_content:
281s             If True, the response's body will be preloaded into memory.
281s
281s         :param bool decode_content:
281s             If True, will attempt to decode the body based on the
281s             'content-encoding' header.
281s
281s         :param release_conn:
281s             If False, then the urlopen call will not release the connection
281s             back into the pool once a response is received (but will release if
281s             you read the entire contents of the response such as when
281s             `preload_content=True`). This is useful if you're not preloading
281s             the response's content immediately. You will need to call
281s             ``r.release_conn()`` on the response ``r`` to return the connection
281s             back into the pool. If None, it takes the value of ``preload_content``
281s             which defaults to ``True``.
281s
281s         :param bool chunked:
281s             If True, urllib3 will send the body using chunked transfer
281s             encoding. Otherwise, urllib3 will send the body using the standard
281s             content-length form. Defaults to False.
281s
281s         :param int body_pos:
281s             Position to seek to in file-like body in the event of a retry or
281s             redirect. Typically this won't need to be set because urllib3 will
281s             auto-populate the value when needed.
281s         """
281s         parsed_url = parse_url(url)
281s         destination_scheme = parsed_url.scheme
281s
281s         if headers is None:
281s             headers = self.headers
281s
281s         if not isinstance(retries, Retry):
281s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
281s
281s         if release_conn is None:
281s             release_conn = preload_content
281s
281s         # Check host
281s         if assert_same_host and not self.is_same_host(url):
281s             raise HostChangedError(self, url, retries)
281s
281s         # Ensure that the URL we're connecting to is properly encoded
281s         if url.startswith("/"):
281s             url = to_str(_encode_target(url))
281s         else:
281s             url = to_str(parsed_url.url)
281s
281s         conn = None
281s
281s         # Track whether `conn` needs to be released before
281s         # returning/raising/recursing. Update this variable if necessary, and
281s         # leave `release_conn` constant throughout the function. That way, if
281s         # the function recurses, the original value of `release_conn` will be
281s         # passed down into the recursive call, and its value will be respected.
281s         #
281s         # See issue #651 [1] for details.
281s         #
281s         # [1]
281s         release_this_conn = release_conn
281s
281s         http_tunnel_required = connection_requires_http_tunnel(
281s             self.proxy, self.proxy_config, destination_scheme
281s         )
281s
281s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
281s         # have to copy the headers dict so we can safely change it without those
281s         # changes being reflected in anyone else's copy.
281s         if not http_tunnel_required:
281s             headers = headers.copy() # type: ignore[attr-defined]
281s             headers.update(self.proxy_headers) # type: ignore[union-attr]
281s
281s         # Must keep the exception bound to a separate variable or else Python 3
281s         # complains about UnboundLocalError.
281s         err = None
281s
281s         # Keep track of whether we cleanly exited the except block. This
281s         # ensures we do proper cleanup in finally.
281s         clean_exit = False
281s
281s         # Rewind body position, if needed. Record current position
281s         # for future rewinds in the event of a redirect/retry.
281s         body_pos = set_file_position(body, body_pos)
281s
281s         try:
281s             # Request a connection from the queue.
281s             timeout_obj = self._get_timeout(timeout)
281s             conn = self._get_conn(timeout=pool_timeout)
281s
281s             conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment]
281s
281s             # Is this a closed/new connection that requires CONNECT tunnelling?
281s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
281s                 try:
281s                     self._prepare_proxy(conn)
281s                 except (BaseSSLError, OSError, SocketTimeout) as e:
281s                     self._raise_timeout(
281s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
281s                     )
281s                     raise
281s
281s             # If we're going to release the connection in ``finally:``, then
281s             # the response doesn't need to know about the connection. Otherwise
281s             # it will also try to release it and we'll have a double-release
281s             # mess.
281s             response_conn = conn if not release_conn else None
281s
281s             # Make the request on the HTTPConnection object
281s >           response = self._make_request(
281s                 conn,
281s                 method,
281s                 url,
281s                 timeout=timeout_obj,
281s                 body=body,
281s                 headers=headers,
281s                 chunked=chunked,
281s                 retries=retries,
281s                 response_conn=response_conn,
281s                 preload_content=preload_content,
281s                 decode_content=decode_content,
281s                 **response_kw,
281s             )
281s
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
281s     conn.request(
281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
281s     self.endheaders()
281s /usr/lib/python3.12/http/client.py:1331: in endheaders
281s     self._send_output(message_body, encode_chunked=encode_chunked)
281s /usr/lib/python3.12/http/client.py:1091: in _send_output
281s     self.send(msg)
281s /usr/lib/python3.12/http/client.py:1035: in send
281s     self.connect()
281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
281s     self.sock = self._new_conn()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s self =
281s
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s
281s         :return: New socket connection.
281s         """
281s         try:
281s             sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s         except socket.gaierror as e:
281s             raise NameResolutionError(self.host, self, e) from e
281s         except SocketTimeout as e:
281s             raise ConnectTimeoutError(
281s                 self,
281s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
281s             ) from e
281s
281s         except OSError as e:
281s >           raise NewConnectionError(
281s                 self, f"Failed to establish a new connection: {e}"
281s             ) from e
281s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
281s
281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
281s
281s The above exception was the direct cause of the following exception:
281s
281s self =
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s
281s         try:
281s >           resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s
281s /usr/lib/python3/dist-packages/requests/adapters.py:486:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
281s     retries = retries.increment(
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s method = 'GET', url = '/a%40b/api/contents', response = None
281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
281s _pool =
281s _stacktrace =
281s
281s     def increment(
281s         self,
281s         method: str | None = None,
281s         url: str | None = None,
281s         response: BaseHTTPResponse | None = None,
281s         error: Exception | None = None,
281s         _pool: ConnectionPool | None = None,
281s         _stacktrace: TracebackType | None = None,
281s     ) -> Retry:
281s         """Return a new Retry object with incremented retry counters.
281s
281s         :param response: A response object, or None, if the server did not
281s             return a response.
281s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
281s         :param Exception error: An error encountered during the request, or
281s             None if the response was received successfully.
281s
281s         :return: A new ``Retry`` object.
281s         """
281s         if self.total is False and error:
281s             # Disabled, indicate to re-raise the error.
281s             raise reraise(type(error), error, _stacktrace)
281s
281s         total = self.total
281s         if total is not None:
281s             total -= 1
281s
281s         connect = self.connect
281s         read = self.read
281s         redirect = self.redirect
281s         status_count = self.status
281s         other = self.other
281s         cause = "unknown"
281s         status = None
281s         redirect_location = None
281s
281s         if error and self._is_connection_error(error):
281s             # Connect retry?
281s             if connect is False:
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif connect is not None:
281s                 connect -= 1
281s
281s         elif error and self._is_read_error(error):
281s             # Read retry?
281s             if read is False or method is None or not self._is_method_retryable(method):
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif read is not None:
281s                 read -= 1
281s
281s         elif error:
281s             # Other retry?
281s             if other is not None:
281s                 other -= 1
281s
281s         elif response and response.get_redirect_location():
281s             # Redirect retry?
281s             if redirect is not None:
281s                 redirect -= 1
281s             cause = "too many redirects"
281s             response_redirect_location = response.get_redirect_location()
281s             if response_redirect_location:
281s                 redirect_location = response_redirect_location
281s             status = response.status
281s
281s         else:
281s             # Incrementing because of a server error like a 500 in
281s             # status_forcelist and the given method is in the allowed_methods
281s             cause = ResponseError.GENERIC_ERROR
281s             if response and response.status:
281s                 if status_count is not None:
281s                     status_count -= 1
281s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
281s                 status = response.status
281s
281s         history = self.history + (
281s             RequestHistory(method, url, error, status, redirect_location),
281s         )
281s
281s         new_retry = self.new(
281s             total=total,
281s             connect=connect,
281s             read=read,
281s             redirect=redirect,
281s             status=status_count,
281s             other=other,
281s             history=history,
281s         )
281s
281s         if new_retry.is_exhausted():
281s             reason = error or ResponseError(cause)
281s >           raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
281s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s
281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
281s
281s During handling of the above exception, another exception occurred:
281s
281s cls =
281s
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s >               cls.fetch_url(url)
281s
281s notebook/tests/launchnotebook.py:53:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s notebook/tests/launchnotebook.py:82: in fetch_url
281s     return requests.get(url)
281s /usr/lib/python3/dist-packages/requests/api.py:73: in get
281s     return request("get", url, params=params, **kwargs)
281s /usr/lib/python3/dist-packages/requests/api.py:59: in request
281s     return session.request(method=method, url=url, **kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
281s     resp = self.send(prep, **send_kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
281s     r = adapter.send(request, **kwargs)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s self =
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s
281s         try:
281s             resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s
281s         except (ProtocolError, OSError) as err:
281s             raise ConnectionError(err, request=request)
281s
281s         except MaxRetryError as e:
281s             if isinstance(e.reason, ConnectTimeoutError):
281s                 # TODO: Remove this in 3.0.0: see #2811
281s                 if not isinstance(e.reason, NewConnectionError):
281s                     raise ConnectTimeout(e, request=request)
281s
281s             if isinstance(e.reason, ResponseError):
281s                 raise RetryError(e, request=request)
281s
281s             if isinstance(e.reason, _ProxyError):
281s                 raise ProxyError(e, request=request)
281s
281s             if isinstance(e.reason, _SSLError):
281s                 # This branch is for urllib3 v1.22 and later.
281s                 raise SSLError(e, request=request)
281s
281s >           raise ConnectionError(e, request=request)
281s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s
281s The above exception was the direct cause of the following exception:
281s
281s cls =
281s
281s     @classmethod
281s     def setup_class(cls):
281s         cls.tmp_dir = TemporaryDirectory()
281s         def tmp(*parts):
281s             path = os.path.join(cls.tmp_dir.name, *parts)
281s             try:
281s                 os.makedirs(path)
281s             except OSError as e:
281s                 if e.errno != errno.EEXIST:
281s                     raise
281s             return path
281s
281s         cls.home_dir = tmp('home')
281s         data_dir = cls.data_dir = tmp('data')
281s         config_dir = cls.config_dir = tmp('config')
281s         runtime_dir = cls.runtime_dir = tmp('runtime')
281s         cls.notebook_dir = tmp('notebooks')
281s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
281s         cls.env_patch.start()
281s         # Patch systemwide & user-wide data & config directories, to isolate
281s         # the tests from oddities of the local setup. But leave Python env
281s         # locations alone, so data files for e.g. nbconvert are accessible.
281s         # If this isolation isn't sufficient, you may need to run the tests in
281s         # a virtualenv or conda env.
281s         cls.path_patch = patch.multiple(
281s             jupyter_core.paths,
281s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
281s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
281s         )
281s         cls.path_patch.start()
281s
281s         config = cls.config or Config()
281s         config.NotebookNotary.db_file = ':memory:'
281s
281s         cls.token = hexlify(os.urandom(4)).decode('ascii')
281s
281s         started = Event()
281s         def start_thread():
281s             try:
281s                 bind_args = cls.get_bind_args()
281s                 app = cls.notebook = NotebookApp(
281s                     port_retries=0,
281s                     open_browser=False,
281s                     config_dir=cls.config_dir,
281s                     data_dir=cls.data_dir,
281s                     runtime_dir=cls.runtime_dir,
281s                     notebook_dir=cls.notebook_dir,
281s                     base_url=cls.url_prefix,
281s                     config=config,
281s                     allow_root=True,
281s                     token=cls.token,
281s                     **bind_args
281s                 )
281s                 if "asyncio" in sys.modules:
281s                     app._init_asyncio_patch()
281s                     import asyncio
281s
281s                     asyncio.set_event_loop(asyncio.new_event_loop())
281s                     # Patch the current loop in order to match production
281s                     # behavior
281s                     import nest_asyncio
281s
281s                     nest_asyncio.apply()
281s                 # don't register signal handler during tests
281s                 app.init_signal = lambda : None
281s                 # clear log handlers and propagate to root for nose to capture it
281s                 # needs to be redone after initialize, which reconfigures logging
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 app.initialize(argv=cls.get_argv())
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 loop = IOLoop.current()
281s                 loop.add_callback(started.set)
281s                 app.start()
281s             finally:
281s                 # set the event, so failure to start doesn't cause a hang
281s                 started.set()
281s                 app.session_manager.close()
281s         cls.notebook_thread = Thread(target=start_thread)
281s         cls.notebook_thread.daemon = True
281s         cls.notebook_thread.start()
281s         started.wait()
281s >       cls.wait_until_alive()
281s
281s notebook/tests/launchnotebook.py:198:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s cls =
281s
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E                   RuntimeError: The notebook server failed to start
281s
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s ____ ERROR at setup of GenericFileCheckpointsAPITest.test_upload_txt_hidden ____
281s
281s self =
281s
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s
281s         :return: New socket connection.
281s         """
281s         try:
281s >           sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s     raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s
281s     def create_connection(
281s         address: tuple[str, int],
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         source_address: tuple[str, int] | None = None,
281s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s     ) -> socket.socket:
281s         """Connect to *address* and return the socket object.
281s
281s         Convenience function. Connect to *address* (a 2-tuple ``(host,
281s         port)``) and return the socket object. Passing the optional
281s         *timeout* parameter will set the timeout on the socket instance
281s         before attempting to connect. If no *timeout* is supplied, the
281s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
281s         is used. If *source_address* is set it must be a tuple of (host, port)
281s         for the socket to bind as a source address before making the connection.
281s         An host of '' or port 0 tells the OS to use the default.
281s         """
281s
281s         host, port = address
281s         if host.startswith("["):
281s             host = host.strip("[]")
281s         err = None
281s
281s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
281s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
281s         # The original create_connection function always returns all records.
281s         family = allowed_gai_family()
281s
281s         try:
281s             host.encode("idna")
281s         except UnicodeError:
281s             raise LocationParseError(f"'{host}', label empty or too long") from None
281s
281s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
281s             af, socktype, proto, canonname, sa = res
281s             sock = None
281s             try:
281s                 sock = socket.socket(af, socktype, proto)
281s
281s                 # If provided, set socket level options before connecting.
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s         """
281s         parsed_url = parse_url(url)
281s         destination_scheme = parsed_url.scheme
281s 
281s         if headers is None:
281s             headers = self.headers
281s 
281s         if not isinstance(retries, Retry):
281s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
281s 
281s         if release_conn is None:
281s             release_conn = preload_content
281s 
281s         # Check host
281s         if assert_same_host and not self.is_same_host(url):
281s             raise HostChangedError(self, url, retries)
281s 
281s         # Ensure that the URL we're connecting to is properly encoded
281s         if url.startswith("/"):
281s             url = to_str(_encode_target(url))
281s         else:
281s             url = to_str(parsed_url.url)
281s 
281s         conn = None
281s 
281s         # Track whether `conn` needs to be released before
281s         # returning/raising/recursing. Update this variable if necessary, and
281s         # leave `release_conn` constant throughout the function. That way, if
281s         # the function recurses, the original value of `release_conn` will be
281s         # passed down into the recursive call, and its value will be respected.
281s         #
281s         # See issue #651 [1] for details.
281s         #
281s         # [1]
281s         release_this_conn = release_conn
281s 
281s         http_tunnel_required = connection_requires_http_tunnel(
281s             self.proxy, self.proxy_config, destination_scheme
281s         )
281s 
281s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
281s         # have to copy the headers dict so we can safely change it without those
281s         # changes being reflected in anyone else's copy.
281s         if not http_tunnel_required:
281s             headers = headers.copy()  # type: ignore[attr-defined]
281s             headers.update(self.proxy_headers)  # type: ignore[union-attr]
281s 
281s         # Must keep the exception bound to a separate variable or else Python 3
281s         # complains about UnboundLocalError.
281s         err = None
281s 
281s         # Keep track of whether we cleanly exited the except block. This
281s         # ensures we do proper cleanup in finally.
281s         clean_exit = False
281s 
281s         # Rewind body position, if needed. Record current position
281s         # for future rewinds in the event of a redirect/retry.
281s         body_pos = set_file_position(body, body_pos)
281s 
281s         try:
281s             # Request a connection from the queue.
281s             timeout_obj = self._get_timeout(timeout)
281s             conn = self._get_conn(timeout=pool_timeout)
281s 
281s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
281s 
281s             # Is this a closed/new connection that requires CONNECT tunnelling?
281s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
281s                 try:
281s                     self._prepare_proxy(conn)
281s                 except (BaseSSLError, OSError, SocketTimeout) as e:
281s                     self._raise_timeout(
281s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
281s                     )
281s                     raise
281s 
281s             # If we're going to release the connection in ``finally:``, then
281s             # the response doesn't need to know about the connection. Otherwise
281s             # it will also try to release it and we'll have a double-release
281s             # mess.
281s             response_conn = conn if not release_conn else None
281s 
281s             # Make the request on the HTTPConnection object
281s >           response = self._make_request(
281s                 conn,
281s                 method,
281s                 url,
281s                 timeout=timeout_obj,
281s                 body=body,
281s                 headers=headers,
281s                 chunked=chunked,
281s                 retries=retries,
281s                 response_conn=response_conn,
281s                 preload_content=preload_content,
281s                 decode_content=decode_content,
281s                 **response_kw,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
281s     conn.request(
281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
281s     self.endheaders()
281s /usr/lib/python3.12/http/client.py:1331: in endheaders
281s     self._send_output(message_body, encode_chunked=encode_chunked)
281s /usr/lib/python3.12/http/client.py:1091: in _send_output
281s     self.send(msg)
281s /usr/lib/python3.12/http/client.py:1035: in send
281s     self.connect()
281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
281s     self.sock = self._new_conn()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s             sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s         except socket.gaierror as e:
281s             raise NameResolutionError(self.host, self, e) from e
281s         except SocketTimeout as e:
281s             raise ConnectTimeoutError(
281s                 self,
281s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
281s             ) from e
281s 
281s         except OSError as e:
281s >           raise NewConnectionError(
281s                 self, f"Failed to establish a new connection: {e}"
281s             ) from e
281s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s >           resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
281s     retries = retries.increment(
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s method = 'GET', url = '/a%40b/api/contents', response = None
281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
281s _pool = 
281s _stacktrace = 
281s 
281s     def increment(
281s         self,
281s         method: str | None = None,
281s         url: str | None = None,
281s         response: BaseHTTPResponse | None = None,
281s         error: Exception | None = None,
281s         _pool: ConnectionPool | None = None,
281s         _stacktrace: TracebackType | None = None,
281s     ) -> Retry:
281s         """Return a new Retry object with incremented retry counters.
281s 
281s         :param response: A response object, or None, if the server did not
281s             return a response.
281s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
281s         :param Exception error: An error encountered during the request, or
281s             None if the response was received successfully.
281s 
281s         :return: A new ``Retry`` object.
281s         """
281s         if self.total is False and error:
281s             # Disabled, indicate to re-raise the error.
281s             raise reraise(type(error), error, _stacktrace)
281s 
281s         total = self.total
281s         if total is not None:
281s             total -= 1
281s 
281s         connect = self.connect
281s         read = self.read
281s         redirect = self.redirect
281s         status_count = self.status
281s         other = self.other
281s         cause = "unknown"
281s         status = None
281s         redirect_location = None
281s 
281s         if error and self._is_connection_error(error):
281s             # Connect retry?
281s             if connect is False:
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif connect is not None:
281s                 connect -= 1
281s 
281s         elif error and self._is_read_error(error):
281s             # Read retry?
281s             if read is False or method is None or not self._is_method_retryable(method):
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif read is not None:
281s                 read -= 1
281s 
281s         elif error:
281s             # Other retry?
281s             if other is not None:
281s                 other -= 1
281s 
281s         elif response and response.get_redirect_location():
281s             # Redirect retry?
281s             if redirect is not None:
281s                 redirect -= 1
281s             cause = "too many redirects"
281s             response_redirect_location = response.get_redirect_location()
281s             if response_redirect_location:
281s                 redirect_location = response_redirect_location
281s             status = response.status
281s 
281s         else:
281s             # Incrementing because of a server error like a 500 in
281s             # status_forcelist and the given method is in the allowed_methods
281s             cause = ResponseError.GENERIC_ERROR
281s             if response and response.status:
281s                 if status_count is not None:
281s                     status_count -= 1
281s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
281s                 status = response.status
281s 
281s         history = self.history + (
281s             RequestHistory(method, url, error, status, redirect_location),
281s         )
281s 
281s         new_retry = self.new(
281s             total=total,
281s             connect=connect,
281s             read=read,
281s             redirect=redirect,
281s             status=status_count,
281s             other=other,
281s             history=history,
281s         )
281s 
281s         if new_retry.is_exhausted():
281s             reason = error or ResponseError(cause)
281s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
281s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s >               cls.fetch_url(url)
281s 
281s notebook/tests/launchnotebook.py:53: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s notebook/tests/launchnotebook.py:82: in fetch_url
281s     return requests.get(url)
281s /usr/lib/python3/dist-packages/requests/api.py:73: in get
281s     return request("get", url, params=params, **kwargs)
281s /usr/lib/python3/dist-packages/requests/api.py:59: in request
281s     return session.request(method=method, url=url, **kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
281s     resp = self.send(prep, **send_kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
281s     r = adapter.send(request, **kwargs)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s             resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s         except (ProtocolError, OSError) as err:
281s             raise ConnectionError(err, request=request)
281s 
281s         except MaxRetryError as e:
281s             if isinstance(e.reason, ConnectTimeoutError):
281s                 # TODO: Remove this in 3.0.0: see #2811
281s                 if not isinstance(e.reason, NewConnectionError):
281s                     raise ConnectTimeout(e, request=request)
281s 
281s             if isinstance(e.reason, ResponseError):
281s                 raise RetryError(e, request=request)
281s 
281s             if isinstance(e.reason, _ProxyError):
281s                 raise ProxyError(e, request=request)
281s 
281s             if isinstance(e.reason, _SSLError):
281s                 # This branch is for urllib3 v1.22 and later.
281s                 raise SSLError(e, request=request)
281s 
281s >           raise ConnectionError(e, request=request)
281s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def setup_class(cls):
281s         cls.tmp_dir = TemporaryDirectory()
281s         def tmp(*parts):
281s             path = os.path.join(cls.tmp_dir.name, *parts)
281s             try:
281s                 os.makedirs(path)
281s             except OSError as e:
281s                 if e.errno != errno.EEXIST:
281s                     raise
281s             return path
281s 
281s         cls.home_dir = tmp('home')
281s         data_dir = cls.data_dir = tmp('data')
281s         config_dir = cls.config_dir = tmp('config')
281s         runtime_dir = cls.runtime_dir = tmp('runtime')
281s         cls.notebook_dir = tmp('notebooks')
281s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
281s         cls.env_patch.start()
281s         # Patch systemwide & user-wide data & config directories, to isolate
281s         # the tests from oddities of the local setup. But leave Python env
281s         # locations alone, so data files for e.g. nbconvert are accessible.
281s         # If this isolation isn't sufficient, you may need to run the tests in
281s         # a virtualenv or conda env.
281s         cls.path_patch = patch.multiple(
281s             jupyter_core.paths,
281s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
281s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
281s         )
281s         cls.path_patch.start()
281s 
281s         config = cls.config or Config()
281s         config.NotebookNotary.db_file = ':memory:'
281s 
281s         cls.token = hexlify(os.urandom(4)).decode('ascii')
281s 
281s         started = Event()
281s         def start_thread():
281s             try:
281s                 bind_args = cls.get_bind_args()
281s                 app = cls.notebook = NotebookApp(
281s                     port_retries=0,
281s                     open_browser=False,
281s                     config_dir=cls.config_dir,
281s                     data_dir=cls.data_dir,
281s                     runtime_dir=cls.runtime_dir,
281s                     notebook_dir=cls.notebook_dir,
281s                     base_url=cls.url_prefix,
281s                     config=config,
281s                     allow_root=True,
281s                     token=cls.token,
281s                     **bind_args
281s                 )
281s                 if "asyncio" in sys.modules:
281s                     app._init_asyncio_patch()
281s                     import asyncio
281s 
281s                     asyncio.set_event_loop(asyncio.new_event_loop())
281s                     # Patch the current loop in order to match production
281s                     # behavior
281s                     import nest_asyncio
281s 
281s                     nest_asyncio.apply()
281s                 # don't register signal handler during tests
281s                 app.init_signal = lambda : None
281s                 # clear log handlers and propagate to root for nose to capture it
281s                 # needs to be redone after initialize, which reconfigures logging
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 app.initialize(argv=cls.get_argv())
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 loop = IOLoop.current()
281s                 loop.add_callback(started.set)
281s                 app.start()
281s             finally:
281s                 # set the event, so failure to start doesn't cause a hang
281s                 started.set()
281s                 app.session_manager.close()
281s         cls.notebook_thread = Thread(target=start_thread)
281s         cls.notebook_thread.daemon = True
281s         cls.notebook_thread.start()
281s         started.wait()
281s >       cls.wait_until_alive()
281s 
281s notebook/tests/launchnotebook.py:198: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E                   RuntimeError: The notebook server failed to start
281s 
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s ________ ERROR at setup of GenericFileCheckpointsAPITest.test_upload_v2 ________
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s >           sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s     raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s 
281s def create_connection(
281s     address: tuple[str, int],
281s     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s     source_address: tuple[str, int] | None = None,
281s     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s ) -> socket.socket:
281s     """Connect to *address* and return the socket object.
281s 
281s     Convenience function. Connect to *address* (a 2-tuple ``(host,
281s     port)``) and return the socket object. Passing the optional
281s     *timeout* parameter will set the timeout on the socket instance
281s     before attempting to connect. If no *timeout* is supplied, the
281s     global default timeout setting returned by :func:`socket.getdefaulttimeout`
281s     is used. If *source_address* is set it must be a tuple of (host, port)
281s     for the socket to bind as a source address before making the connection.
281s     An host of '' or port 0 tells the OS to use the default.
281s     """
281s 
281s     host, port = address
281s     if host.startswith("["):
281s         host = host.strip("[]")
281s     err = None
281s 
281s     # Using the value from allowed_gai_family() in the context of getaddrinfo lets
281s     # us select whether to work with IPv4 DNS records, IPv6 records, or both.
281s     # The original create_connection function always returns all records.
281s     family = allowed_gai_family()
281s 
281s     try:
281s         host.encode("idna")
281s     except UnicodeError:
281s         raise LocationParseError(f"'{host}', label empty or too long") from None
281s 
281s     for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
281s         af, socktype, proto, canonname, sa = res
281s         sock = None
281s         try:
281s             sock = socket.socket(af, socktype, proto)
281s 
281s             # If provided, set socket level options before connecting.
281s             _set_socket_options(sock, socket_options)
281s 
281s             if timeout is not _DEFAULT_TIMEOUT:
281s                 sock.settimeout(timeout)
281s             if source_address:
281s                 sock.bind(source_address)
281s >           sock.connect(sa)
281s E           ConnectionRefusedError: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s method = 'GET', url = '/a%40b/api/contents', body = None
281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s redirect = False, assert_same_host = False
281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
281s release_conn = False, chunked = False, body_pos = None, preload_content = False
281s decode_content = False, response_kw = {}
281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
281s destination_scheme = None, conn = None, release_this_conn = True
281s http_tunnel_required = False, err = None, clean_exit = False
281s 
281s     def urlopen(  # type: ignore[override]
281s         self,
281s         method: str,
281s         url: str,
281s         body: _TYPE_BODY | None = None,
281s         headers: typing.Mapping[str, str] | None = None,
281s         retries: Retry | bool | int | None = None,
281s         redirect: bool = True,
281s         assert_same_host: bool = True,
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         pool_timeout: int | None = None,
281s         release_conn: bool | None = None,
281s         chunked: bool = False,
281s         body_pos: _TYPE_BODY_POSITION | None = None,
281s         preload_content: bool = True,
281s         decode_content: bool = True,
281s         **response_kw: typing.Any,
281s     ) -> BaseHTTPResponse:
281s         """
281s         Get a connection from the pool and perform an HTTP request.
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s >           resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
281s     retries = retries.increment(
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s method = 'GET', url = '/a%40b/api/contents', response = None
281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
281s _pool = 
281s _stacktrace = 
281s 
281s     def increment(
281s         self,
281s         method: str | None = None,
281s         url: str | None = None,
281s         response: BaseHTTPResponse | None = None,
281s         error: Exception | None = None,
281s         _pool: ConnectionPool | None = None,
281s         _stacktrace: TracebackType | None = None,
281s     ) -> Retry:
281s         """Return a new Retry object with incremented retry counters.
281s 
281s         :param response: A response object, or None, if the server did not
281s             return a response.
281s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
281s         :param Exception error: An error encountered during the request, or
281s             None if the response was received successfully.
281s 
281s         :return: A new ``Retry`` object.
281s         """
281s         if self.total is False and error:
281s             # Disabled, indicate to re-raise the error.
281s             raise reraise(type(error), error, _stacktrace)
281s 
281s         total = self.total
281s         if total is not None:
281s             total -= 1
281s 
281s         connect = self.connect
281s         read = self.read
281s         redirect = self.redirect
281s         status_count = self.status
281s         other = self.other
281s         cause = "unknown"
281s         status = None
281s         redirect_location = None
281s 
281s         if error and self._is_connection_error(error):
281s             # Connect retry?
281s             if connect is False:
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif connect is not None:
281s                 connect -= 1
281s 
281s         elif error and self._is_read_error(error):
281s             # Read retry?
281s             if read is False or method is None or not self._is_method_retryable(method):
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif read is not None:
281s                 read -= 1
281s 
281s         elif error:
281s             # Other retry?
281s             if other is not None:
281s                 other -= 1
281s 
281s         elif response and response.get_redirect_location():
281s             # Redirect retry?
281s             if redirect is not None:
281s                 redirect -= 1
281s             cause = "too many redirects"
281s             response_redirect_location = response.get_redirect_location()
281s             if response_redirect_location:
281s                 redirect_location = response_redirect_location
281s             status = response.status
281s 
281s         else:
281s             # Incrementing because of a server error like a 500 in
281s             # status_forcelist and the given method is in the allowed_methods
281s             cause = ResponseError.GENERIC_ERROR
281s             if response and response.status:
281s                 if status_count is not None:
281s                     status_count -= 1
281s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
281s                 status = response.status
281s 
281s         history = self.history + (
281s             RequestHistory(method, url, error, status, redirect_location),
281s         )
281s 
281s         new_retry = self.new(
281s             total=total,
281s             connect=connect,
281s             read=read,
281s             redirect=redirect,
281s             status=status_count,
281s             other=other,
281s             history=history,
281s         )
281s 
281s         if new_retry.is_exhausted():
281s             reason = error or ResponseError(cause)
281s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
281s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s >               cls.fetch_url(url)
281s 
281s notebook/tests/launchnotebook.py:53: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s notebook/tests/launchnotebook.py:82: in fetch_url
281s     return requests.get(url)
281s /usr/lib/python3/dist-packages/requests/api.py:73: in get
281s     return request("get", url, params=params, **kwargs)
281s /usr/lib/python3/dist-packages/requests/api.py:59: in request
281s     return session.request(method=method, url=url, **kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
281s     resp = self.send(prep, **send_kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
281s     r = adapter.send(request, **kwargs)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s             resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s         except (ProtocolError, OSError) as err:
281s             raise ConnectionError(err, request=request)
281s 
281s         except MaxRetryError as e:
281s             if isinstance(e.reason, ConnectTimeoutError):
281s                 # TODO: Remove this in 3.0.0: see #2811
281s                 if not isinstance(e.reason, NewConnectionError):
281s                     raise ConnectTimeout(e, request=request)
281s 
281s             if isinstance(e.reason, ResponseError):
281s                 raise RetryError(e, request=request)
281s 
281s             if isinstance(e.reason, _ProxyError):
281s                 raise ProxyError(e, request=request)
281s 
281s             if isinstance(e.reason, _SSLError):
281s                 # This branch is for urllib3 v1.22 and later.
281s                 raise SSLError(e, request=request)
281s 
281s >           raise ConnectionError(e, request=request)
281s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def setup_class(cls):
281s         cls.tmp_dir = TemporaryDirectory()
281s         def tmp(*parts):
281s             path = os.path.join(cls.tmp_dir.name, *parts)
281s             try:
281s                 os.makedirs(path)
281s             except OSError as e:
281s                 if e.errno != errno.EEXIST:
281s                     raise
281s             return path
281s 
281s         cls.home_dir = tmp('home')
281s         data_dir = cls.data_dir = tmp('data')
281s         config_dir = cls.config_dir = tmp('config')
281s         runtime_dir = cls.runtime_dir = tmp('runtime')
281s         cls.notebook_dir = tmp('notebooks')
281s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
281s         cls.env_patch.start()
281s         # Patch systemwide & user-wide data & config directories, to isolate
281s         # the tests from oddities of the local setup. But leave Python env
281s         # locations alone, so data files for e.g. nbconvert are accessible.
281s         # If this isolation isn't sufficient, you may need to run the tests in
281s         # a virtualenv or conda env.
281s         cls.path_patch = patch.multiple(
281s             jupyter_core.paths,
281s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
281s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
281s         )
281s         cls.path_patch.start()
281s 
281s         config = cls.config or Config()
281s         config.NotebookNotary.db_file = ':memory:'
281s 
281s         cls.token = hexlify(os.urandom(4)).decode('ascii')
281s 
281s         started = Event()
281s         def start_thread():
281s             try:
281s                 bind_args = cls.get_bind_args()
281s                 app = cls.notebook = NotebookApp(
281s                     port_retries=0,
281s                     open_browser=False,
281s                     config_dir=cls.config_dir,
281s                     data_dir=cls.data_dir,
281s                     runtime_dir=cls.runtime_dir,
281s                     notebook_dir=cls.notebook_dir,
281s                     base_url=cls.url_prefix,
281s                     config=config,
281s                     allow_root=True,
281s                     token=cls.token,
281s                     **bind_args
281s                 )
281s                 if "asyncio" in sys.modules:
281s                     app._init_asyncio_patch()
281s                     import asyncio
281s 
281s                     asyncio.set_event_loop(asyncio.new_event_loop())
281s                     # Patch the current loop in order to match production
281s                     # behavior
281s                     import nest_asyncio
281s 
281s                     nest_asyncio.apply()
281s                 # don't register signal handler during tests
281s                 app.init_signal = lambda : None
281s                 # clear log handlers and propagate to root for nose to capture it
281s                 # needs to be redone after initialize, which reconfigures logging
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 app.initialize(argv=cls.get_argv())
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 loop = IOLoop.current()
281s                 loop.add_callback(started.set)
281s                 app.start()
281s             finally:
281s                 # set the event, so failure to start doesn't cause a hang
281s                 started.set()
281s                 app.session_manager.close()
281s         cls.notebook_thread = Thread(target=start_thread)
281s         cls.notebook_thread.daemon = True
281s         cls.notebook_thread.start()
281s         started.wait()
281s >       cls.wait_until_alive()
281s 
281s notebook/tests/launchnotebook.py:198: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E                   RuntimeError: The notebook server failed to start
281s 
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s _______________ ERROR at setup of KernelAPITest.test_connections _______________
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s >           sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s     raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s 
281s def create_connection(
281s     address: tuple[str, int],
281s     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s     source_address: tuple[str, int] | None = None,
281s     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s ) -> socket.socket:
281s     """Connect to *address* and return the socket object.
281s 
281s     Convenience function. Connect to *address* (a 2-tuple ``(host,
281s     port)``) and return the socket object. Passing the optional
281s     *timeout* parameter will set the timeout on the socket instance
281s     before attempting to connect. If no *timeout* is supplied, the
281s     global default timeout setting returned by :func:`socket.getdefaulttimeout`
281s     is used. If *source_address* is set it must be a tuple of (host, port)
281s     for the socket to bind as a source address before making the connection.
281s     An host of '' or port 0 tells the OS to use the default.
281s     """
281s 
281s     host, port = address
281s     if host.startswith("["):
281s         host = host.strip("[]")
281s     err = None
281s 
281s     # Using the value from allowed_gai_family() in the context of getaddrinfo lets
281s     # us select whether to work with IPv4 DNS records, IPv6 records, or both.
281s     # The original create_connection function always returns all records.
281s     family = allowed_gai_family()
281s 
281s     try:
281s         host.encode("idna")
281s     except UnicodeError:
281s         raise LocationParseError(f"'{host}', label empty or too long") from None
281s 
281s     for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
281s         af, socktype, proto, canonname, sa = res
281s         sock = None
281s         try:
281s             sock = socket.socket(af, socktype, proto)
281s 
281s             # If provided, set socket level options before connecting.
281s             _set_socket_options(sock, socket_options)
281s 
281s             if timeout is not _DEFAULT_TIMEOUT:
281s                 sock.settimeout(timeout)
281s             if source_address:
281s                 sock.bind(source_address)
281s >           sock.connect(sa)
281s E           ConnectionRefusedError: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s method = 'GET', url = '/a%40b/api/contents', body = None
281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s redirect = False, assert_same_host = False
281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
281s release_conn = False, chunked = False, body_pos = None, preload_content = False
281s decode_content = False, response_kw = {}
281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
281s destination_scheme = None, conn = None, release_this_conn = True
281s http_tunnel_required = False, err = None, clean_exit = False
281s 
281s     def urlopen(  # type: ignore[override]
281s         self,
281s         method: str,
281s         url: str,
281s         body: _TYPE_BODY | None = None,
281s         headers: typing.Mapping[str, str] | None = None,
281s         retries: Retry | bool | int | None = None,
281s         redirect: bool = True,
281s         assert_same_host: bool = True,
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         pool_timeout: int | None = None,
281s         release_conn: bool | None = None,
281s         chunked: bool = False,
281s         body_pos: _TYPE_BODY_POSITION | None = None,
281s         preload_content: bool = True,
281s         decode_content: bool = True,
281s         **response_kw: typing.Any,
281s     ) -> BaseHTTPResponse:
281s         """
281s         Get a connection from the pool and perform an HTTP request. This is the
281s         lowest level call for making a request, so you'll need to specify all
281s         the raw details.
281s 
281s         .. note::
281s 
281s             More commonly, it's appropriate to use a convenience method
281s             such as :meth:`request`.
281s 
281s         .. note::
281s 
281s             `release_conn` will only behave as expected if
281s             `preload_content=False` because we want to make
281s             `preload_content=False` the default behaviour someday soon without
281s             breaking backwards compatibility.
281s 
281s         :param method:
281s             HTTP request method (such as GET, POST, PUT, etc.)
281s 
281s         :param url:
281s             The URL to perform the request on.
281s 
281s         :param body:
281s             Data to send in the request body, either :class:`str`, :class:`bytes`,
281s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
281s 
281s         :param headers:
281s             Dictionary of custom headers to send, such as User-Agent,
281s             If-None-Match, etc. If None, pool headers are used. If provided,
281s             these headers completely replace any pool-specific headers.
281s 
281s         :param retries:
281s             Configure the number of retries to allow before raising a
281s             :class:`~urllib3.exceptions.MaxRetryError` exception.
281s 
281s             Pass ``None`` to retry until you receive a response. Pass a
281s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
281s             over different types of retries.
281s             Pass an integer number to retry connection errors that many times,
281s             but no other types of errors. Pass zero to never retry.
281s 
281s             If ``False``, then retries are disabled and any exception is raised
281s             immediately. Also, instead of raising a MaxRetryError on redirects,
281s             the redirect response will be returned.
281s 
281s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
281s 
281s         :param redirect:
281s             If True, automatically handle redirects (status codes 301, 302,
281s             303, 307, 308). Each redirect counts as a retry. Disabling retries
281s             will disable redirect, too.
281s 
281s         :param assert_same_host:
281s             If ``True``, will make sure that the host of the pool requests is
281s             consistent else will raise HostChangedError. When ``False``, you can
281s             use the pool on an HTTP proxy and request foreign hosts.
281s 
281s         :param timeout:
281s             If specified, overrides the default timeout for this one
281s             request. It may be a float (in seconds) or an instance of
281s             :class:`urllib3.util.Timeout`.
281s 
281s         :param pool_timeout:
281s             If set and the pool is set to block=True, then this method will
281s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
281s             connection is available within the time period.
281s 
281s         :param bool preload_content:
281s             If True, the response's body will be preloaded into memory.
281s 
281s         :param bool decode_content:
281s             If True, will attempt to decode the body based on the
281s             'content-encoding' header.
281s 
281s         :param release_conn:
281s             If False, then the urlopen call will not release the connection
281s             back into the pool once a response is received (but will release if
281s             you read the entire contents of the response such as when
281s             `preload_content=True`). This is useful if you're not preloading
281s             the response's content immediately. You will need to call
281s             ``r.release_conn()`` on the response ``r`` to return the connection
281s             back into the pool. If None, it takes the value of ``preload_content``
281s             which defaults to ``True``.
281s 
281s         :param bool chunked:
281s             If True, urllib3 will send the body using chunked transfer
281s             encoding. Otherwise, urllib3 will send the body using the standard
281s             content-length form. Defaults to False.
281s 
281s         :param int body_pos:
281s             Position to seek to in file-like body in the event of a retry or
281s             redirect. Typically this won't need to be set because urllib3 will
281s             auto-populate the value when needed.
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send
281s     self.connect()
281s [remainder of this traceback is identical to the first error above: urllib3.exceptions.NewConnectionError ([Errno 111] Connection refused) -> urllib3.exceptions.MaxRetryError (HTTPConnectionPool(host='localhost', port=12341), url /a%40b/api/contents) -> requests.exceptions.ConnectionError in wait_until_alive]
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s _____________ ERROR at setup of KernelAPITest.test_default_kernel ______________ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s ...
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s _____________ ERROR at setup of KernelAPITest.test_kernel_handler ______________ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s ___________ ERROR at setup of KernelAPITest.test_main_kernel_handler ___________ 281s
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s _______________ ERROR at setup of KernelAPITest.test_no_kernels ________________
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s ____________ ERROR at setup of AsyncKernelAPITest.test_connections _____________ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
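The `Retry(total=0, connect=None, read=False, ...)` object and the `increment()` raise shown in the frames above can be reproduced without any server. A minimal sketch against urllib3's public `Retry` API (checked against urllib3 1.26/2.x semantics; the first argument to `NewConnectionError` is left as `None` since only the message matters here):

```python
from urllib3.exceptions import MaxRetryError, NewConnectionError
from urllib3.util.retry import Retry

err = NewConnectionError(
    None, "Failed to establish a new connection: [Errno 111] Connection refused"
)

# total=0: one connection failure exhausts the budget; increment() wraps
# the error in MaxRetryError, which requests later re-raises as
# requests.exceptions.ConnectionError (as seen later in this log).
try:
    Retry(total=0).increment(method="GET", url="/a%40b/api/contents", error=err)
    wrapped = None
except MaxRetryError as e:
    wrapped = e

# total=False: retries are disabled entirely, and the original error is
# re-raised as-is (the `self.total is False` branch in the source above).
try:
    Retry(total=False).increment(method="GET", url="/a%40b/api/contents", error=err)
    reraised = None
except NewConnectionError as e:
    reraised = e
```

This is why the test harness sees a `MaxRetryError` wrapping the `[Errno 111]` refusal rather than the raw socket error: the pool was built with `total=0`, not `total=False`.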
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s if not async_testing_enabled: # Can be removed once jupyter_client >= 6.1 is required. 
281s raise SkipTest("AsyncKernelAPITest tests skipped due to down-level jupyter_client!") 281s > super().setup_class() 281s 281s notebook/services/kernels/tests/test_kernels_api.py:206: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:198: in setup_class 281s cls.wait_until_alive() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s ___________ ERROR at setup of AsyncKernelAPITest.test_default_kernel ___________ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 
281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 
281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = 
True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. 
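The `retries` conversions this docstring describes are performed by `Retry.from_int`, which `urlopen` calls when handed a bare int instead of a `Retry` object. A small illustration of the documented behavior (assuming urllib3 is installed; verified against the 1.26/2.x API):

```python
from urllib3.util.retry import Retry

# An integer is promoted to a Retry object tracking only a total budget.
r = Retry.from_int(3)

# Zero means "never retry": the first error exhausts the budget and is
# wrapped in MaxRetryError rather than being retried.
r0 = Retry.from_int(0)

# An existing Retry instance passes through unchanged, so fine-grained
# configuration (separate connect/read budgets) is preserved.
custom = Retry(total=5, connect=2, read=2)
same = Retry.from_int(custom)
```

The test harness in this log ends up with `Retry(total=0, ...)` via this path, which is why a single refused connection is enough to abort the request.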
Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. 
Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 
281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 
281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 
281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | 
None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 
281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s 
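The `wait_until_alive` frames above come from the harness's poll loop: keep fetching the contents API until it answers, but abort as soon as the server thread dies. A generic sketch of that pattern (the constants and the RuntimeError message follow notebook/tests/launchnotebook.py; `fetch` and `server_alive` are stand-in callables, and the numeric values are illustrative assumptions):

```python
import time

MAX_WAITTIME = 30     # seconds; illustrative value
POLL_INTERVAL = 0.1   # seconds; illustrative value

def wait_until_alive(fetch, server_alive):
    """Poll `fetch` until it succeeds, failing fast once the server dies."""
    last_error = None
    for _ in range(int(MAX_WAITTIME / POLL_INTERVAL)):
        try:
            return fetch()
        except Exception as e:
            last_error = e
            # Mirrors the log: once the notebook thread is dead there is
            # no point polling further, so chain the network error into a
            # RuntimeError for the test report.
            if not server_alive():
                raise RuntimeError("The notebook server failed to start") from e
        time.sleep(POLL_INTERVAL)
    raise TimeoutError("Server did not respond in time") from last_error
```

Chaining with `raise ... from e` is what produces the "The above exception was the direct cause of the following exception" sections in this log, so the original `ConnectionError` stays visible under the final `RuntimeError`.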
/usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 
281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request)
281s 
281s >           raise ConnectionError(e, request=request)
281s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def setup_class(cls):
281s         if not async_testing_enabled:  # Can be removed once jupyter_client >= 6.1 is required.
281s             raise SkipTest("AsyncKernelAPITest tests skipped due to down-level jupyter_client!")
281s >       super().setup_class()
281s 
281s notebook/services/kernels/tests/test_kernels_api.py:206: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s notebook/tests/launchnotebook.py:198: in setup_class
281s     cls.wait_until_alive()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E                   RuntimeError: The notebook server failed to start
281s 
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s ___________ ERROR at setup of AsyncKernelAPITest.test_kernel_handler ___________
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
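The `wait_until_alive` loop in this traceback polls the contents API until the server answers, and fails fast with a `RuntimeError` once the server thread is no longer alive. A minimal, generic sketch of that polling pattern (hypothetical names; this is not the notebook test harness itself, and the constants are assumed values):

```python
import time

MAX_WAITTIME = 30     # seconds to keep polling (assumed value)
POLL_INTERVAL = 0.1   # seconds between attempts (assumed value)

def wait_until_alive(fetch, server_is_alive):
    """Poll fetch() until it succeeds, the server thread dies,
    or the overall deadline expires."""
    last_error = None
    for _ in range(int(MAX_WAITTIME / POLL_INTERVAL)):
        try:
            fetch()
            return  # server answered; stop polling
        except Exception as e:
            last_error = e
            if not server_is_alive():
                # Mirrors the harness behaviour seen in the log:
                # no point retrying once the server thread is gone.
                raise RuntimeError("The notebook server failed to start") from e
        time.sleep(POLL_INTERVAL)
    raise TimeoutError("server never came up") from last_error
```

The `from e` chaining is what produces the "The above exception was the direct cause of the following exception" sections in the output above.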
281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 
281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = 
True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. 
Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. 
Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 
281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 
281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 
281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | 
None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 
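`Retry.increment` above returns a new object with one retry budget consumed, and raising `MaxRetryError` happens once the budget is exhausted. A toy model of that immutable counter logic (this is an illustration, not urllib3's `Retry`; the class and error message here are invented for the sketch):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class MiniRetry:
    """Toy model of the Retry counter seen in the traceback:
    increment() returns a NEW object with one budget consumed."""
    total: int
    history: tuple = ()

    def is_exhausted(self):
        return self.total < 0

    def increment(self, error):
        new = replace(self, total=self.total - 1,
                      history=self.history + (repr(error),))
        if new.is_exhausted():
            # Analogous to raising MaxRetryError from the last error.
            raise RuntimeError(f"Max retries exceeded (last error: {error!r})")
        return new
```

The log shows `Retry(total=0, ...)`, requests' default for `HTTPAdapter.max_retries: with a budget of zero, the very first connection refusal exhausts the counter, so a single failed connect surfaces immediately as `MaxRetryError`.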
281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s 
/usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 
281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request)
281s 
281s >           raise ConnectionError(e, request=request)
281s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def setup_class(cls):
281s         if not async_testing_enabled:  # Can be removed once jupyter_client >= 6.1 is required.
281s             raise SkipTest("AsyncKernelAPITest tests skipped due to down-level jupyter_client!")
281s >       super().setup_class()
281s 
281s notebook/services/kernels/tests/test_kernels_api.py:206: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s notebook/tests/launchnotebook.py:198: in setup_class
281s     cls.wait_until_alive()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E                   RuntimeError: The notebook server failed to start
281s 
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s ________ ERROR at setup of AsyncKernelAPITest.test_main_kernel_handler _________
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s >           sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s     raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s 
281s     def create_connection(
281s         address: tuple[str, int],
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         source_address: tuple[str, int] | None = None,
281s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s     ) -> socket.socket:
281s         """Connect to *address* and return the socket object.
281s 
281s         Convenience function. Connect to *address* (a 2-tuple ``(host,
281s         port)``) and return the socket object. Passing the optional
281s         *timeout* parameter will set the timeout on the socket instance
281s         before attempting to connect. If no *timeout* is supplied, the
281s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
281s         is used. If *source_address* is set it must be a tuple of (host, port)
281s         for the socket to bind as a source address before making the connection.
281s         An host of '' or port 0 tells the OS to use the default.
281s         """
281s 
281s         host, port = address
281s         if host.startswith("["):
281s             host = host.strip("[]")
281s         err = None
281s 
281s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
281s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
281s         # The original create_connection function always returns all records.
281s         family = allowed_gai_family()
281s 
281s         try:
281s             host.encode("idna")
281s         except UnicodeError:
281s             raise LocationParseError(f"'{host}', label empty or too long") from None
281s 
281s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
281s             af, socktype, proto, canonname, sa = res
281s             sock = None
281s             try:
281s                 sock = socket.socket(af, socktype, proto)
281s 
281s                 # If provided, set socket level options before connecting.
281s                 _set_socket_options(sock, socket_options)
281s 
281s                 if timeout is not _DEFAULT_TIMEOUT:
281s                     sock.settimeout(timeout)
281s                 if source_address:
281s                     sock.bind(source_address)
281s >               sock.connect(sa)
281s E               ConnectionRefusedError: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s method = 'GET', url = '/a%40b/api/contents', body = None
281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s redirect = False, assert_same_host = False
281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
281s release_conn = False, chunked = False, body_pos = None, preload_content = False
281s decode_content = False, response_kw = {}
281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
281s destination_scheme = None, conn = None, release_this_conn = True
281s http_tunnel_required = False, err = None, clean_exit = False
281s 
281s     def urlopen( # type: ignore[override]
281s         self,
281s         method: str,
281s         url: str,
281s         body: _TYPE_BODY | None = None,
281s         headers: typing.Mapping[str, str] | None = None,
281s         retries: Retry | bool | int | None = None,
281s         redirect: bool = True,
281s         assert_same_host: bool = True,
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         pool_timeout: int | None = None,
281s         release_conn: bool | None = None,
281s         chunked: bool = False,
281s         body_pos: _TYPE_BODY_POSITION | None = None,
281s         preload_content: bool = True,
281s         decode_content: bool = True,
281s         **response_kw: typing.Any,
281s     ) -> BaseHTTPResponse:
281s         """
281s         Get a connection from the pool and perform an HTTP request. This is the
281s         lowest level call for making a request, so you'll need to specify all
281s         the raw details.
281s 
281s         .. note::
281s 
281s            More commonly, it's appropriate to use a convenience method
281s            such as :meth:`request`.
281s 
281s         .. note::
281s 
281s            `release_conn` will only behave as expected if
281s            `preload_content=False` because we want to make
281s            `preload_content=False` the default behaviour someday soon without
281s            breaking backwards compatibility.
281s 
281s         :param method:
281s             HTTP request method (such as GET, POST, PUT, etc.)
281s 
281s         :param url:
281s             The URL to perform the request on.
281s 
281s         :param body:
281s             Data to send in the request body, either :class:`str`, :class:`bytes`,
281s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
281s 
281s         :param headers:
281s             Dictionary of custom headers to send, such as User-Agent,
281s             If-None-Match, etc. If None, pool headers are used. If provided,
281s             these headers completely replace any pool-specific headers.
281s 
281s         :param retries:
281s             Configure the number of retries to allow before raising a
281s             :class:`~urllib3.exceptions.MaxRetryError` exception.
281s 
281s             Pass ``None`` to retry until you receive a response. Pass a
281s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
281s             over different types of retries.
281s             Pass an integer number to retry connection errors that many times,
281s             but no other types of errors. Pass zero to never retry.
281s 
281s             If ``False``, then retries are disabled and any exception is raised
281s             immediately. Also, instead of raising a MaxRetryError on redirects,
281s             the redirect response will be returned.
281s 
281s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
281s 
281s         :param redirect:
281s             If True, automatically handle redirects (status codes 301, 302,
281s             303, 307, 308). Each redirect counts as a retry. Disabling retries
281s             will disable redirect, too.
281s 
281s         :param assert_same_host:
281s             If ``True``, will make sure that the host of the pool requests is
281s             consistent else will raise HostChangedError. When ``False``, you can
281s             use the pool on an HTTP proxy and request foreign hosts.
281s 
281s         :param timeout:
281s             If specified, overrides the default timeout for this one
281s             request. It may be a float (in seconds) or an instance of
281s             :class:`urllib3.util.Timeout`.
281s 
281s         :param pool_timeout:
281s             If set and the pool is set to block=True, then this method will
281s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
281s             connection is available within the time period.
281s 
281s         :param bool preload_content:
281s             If True, the response's body will be preloaded into memory.
281s 
281s         :param bool decode_content:
281s             If True, will attempt to decode the body based on the
281s             'content-encoding' header.
281s 
281s         :param release_conn:
281s             If False, then the urlopen call will not release the connection
281s             back into the pool once a response is received (but will release if
281s             you read the entire contents of the response such as when
281s             `preload_content=True`). This is useful if you're not preloading
281s             the response's content immediately. You will need to call
281s             ``r.release_conn()`` on the response ``r`` to return the connection
281s             back into the pool. If None, it takes the value of ``preload_content``
281s             which defaults to ``True``.
281s 
281s         :param bool chunked:
281s             If True, urllib3 will send the body using chunked transfer
281s             encoding. Otherwise, urllib3 will send the body using the standard
281s             content-length form. Defaults to False.
281s 
281s         :param int body_pos:
281s             Position to seek to in file-like body in the event of a retry or
281s             redirect. Typically this won't need to be set because urllib3 will
281s             auto-populate the value when needed.
281s         """
281s         parsed_url = parse_url(url)
281s         destination_scheme = parsed_url.scheme
281s 
281s         if headers is None:
281s             headers = self.headers
281s 
281s         if not isinstance(retries, Retry):
281s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
281s 
281s         if release_conn is None:
281s             release_conn = preload_content
281s 
281s         # Check host
281s         if assert_same_host and not self.is_same_host(url):
281s             raise HostChangedError(self, url, retries)
281s 
281s         # Ensure that the URL we're connecting to is properly encoded
281s         if url.startswith("/"):
281s             url = to_str(_encode_target(url))
281s         else:
281s             url = to_str(parsed_url.url)
281s 
281s         conn = None
281s 
281s         # Track whether `conn` needs to be released before
281s         # returning/raising/recursing. Update this variable if necessary, and
281s         # leave `release_conn` constant throughout the function. That way, if
281s         # the function recurses, the original value of `release_conn` will be
281s         # passed down into the recursive call, and its value will be respected.
281s         #
281s         # See issue #651 [1] for details.
281s         #
281s         # [1]
281s         release_this_conn = release_conn
281s 
281s         http_tunnel_required = connection_requires_http_tunnel(
281s             self.proxy, self.proxy_config, destination_scheme
281s         )
281s 
281s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
281s         # have to copy the headers dict so we can safely change it without those
281s         # changes being reflected in anyone else's copy.
281s         if not http_tunnel_required:
281s             headers = headers.copy() # type: ignore[attr-defined]
281s             headers.update(self.proxy_headers) # type: ignore[union-attr]
281s 
281s         # Must keep the exception bound to a separate variable or else Python 3
281s         # complains about UnboundLocalError.
281s         err = None
281s 
281s         # Keep track of whether we cleanly exited the except block. This
281s         # ensures we do proper cleanup in finally.
281s         clean_exit = False
281s 
281s         # Rewind body position, if needed. Record current position
281s         # for future rewinds in the event of a redirect/retry.
281s         body_pos = set_file_position(body, body_pos)
281s 
281s         try:
281s             # Request a connection from the queue.
281s             timeout_obj = self._get_timeout(timeout)
281s             conn = self._get_conn(timeout=pool_timeout)
281s 
281s             conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment]
281s 
281s             # Is this a closed/new connection that requires CONNECT tunnelling?
281s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
281s                 try:
281s                     self._prepare_proxy(conn)
281s                 except (BaseSSLError, OSError, SocketTimeout) as e:
281s                     self._raise_timeout(
281s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
281s                     )
281s                     raise
281s 
281s             # If we're going to release the connection in ``finally:``, then
281s             # the response doesn't need to know about the connection. Otherwise
281s             # it will also try to release it and we'll have a double-release
281s             # mess.
281s             response_conn = conn if not release_conn else None
281s 
281s             # Make the request on the HTTPConnection object
281s >           response = self._make_request(
281s                 conn,
281s                 method,
281s                 url,
281s                 timeout=timeout_obj,
281s                 body=body,
281s                 headers=headers,
281s                 chunked=chunked,
281s                 retries=retries,
281s                 response_conn=response_conn,
281s                 preload_content=preload_content,
281s                 decode_content=decode_content,
281s                 **response_kw,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
281s     conn.request(
281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
281s     self.endheaders()
281s /usr/lib/python3.12/http/client.py:1331: in endheaders
281s     self._send_output(message_body, encode_chunked=encode_chunked)
281s /usr/lib/python3.12/http/client.py:1091: in _send_output
281s     self.send(msg)
281s /usr/lib/python3.12/http/client.py:1035: in send
281s     self.connect()
281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
281s     self.sock = self._new_conn()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s             sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s         except socket.gaierror as e:
281s             raise NameResolutionError(self.host, self, e) from e
281s         except SocketTimeout as e:
281s             raise ConnectTimeoutError(
281s                 self,
281s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
281s             ) from e
281s 
281s         except OSError as e:
281s >           raise NewConnectionError(
281s                 self, f"Failed to establish a new connection: {e}"
281s             ) from e
281s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s >           resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
281s     retries = retries.increment(
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s method = 'GET', url = '/a%40b/api/contents', response = None
281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
281s _pool = 
281s _stacktrace = 
281s 
281s     def increment(
281s         self,
281s         method: str | None = None,
281s         url: str | None = None,
281s         response: BaseHTTPResponse | None = None,
281s         error: Exception | None = None,
281s         _pool: ConnectionPool | None = None,
281s         _stacktrace: TracebackType | None = None,
281s     ) -> Retry:
281s         """Return a new Retry object with incremented retry counters.
281s 
281s         :param response: A response object, or None, if the server did not
281s             return a response.
281s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
281s         :param Exception error: An error encountered during the request, or
281s             None if the response was received successfully.
281s 
281s         :return: A new ``Retry`` object.
281s         """
281s         if self.total is False and error:
281s             # Disabled, indicate to re-raise the error.
281s             raise reraise(type(error), error, _stacktrace)
281s 
281s         total = self.total
281s         if total is not None:
281s             total -= 1
281s 
281s         connect = self.connect
281s         read = self.read
281s         redirect = self.redirect
281s         status_count = self.status
281s         other = self.other
281s         cause = "unknown"
281s         status = None
281s         redirect_location = None
281s 
281s         if error and self._is_connection_error(error):
281s             # Connect retry?
281s             if connect is False:
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif connect is not None:
281s                 connect -= 1
281s 
281s         elif error and self._is_read_error(error):
281s             # Read retry?
281s             if read is False or method is None or not self._is_method_retryable(method):
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif read is not None:
281s                 read -= 1
281s 
281s         elif error:
281s             # Other retry?
281s             if other is not None:
281s                 other -= 1
281s 
281s         elif response and response.get_redirect_location():
281s             # Redirect retry?
281s             if redirect is not None:
281s                 redirect -= 1
281s             cause = "too many redirects"
281s             response_redirect_location = response.get_redirect_location()
281s             if response_redirect_location:
281s                 redirect_location = response_redirect_location
281s             status = response.status
281s 
281s         else:
281s             # Incrementing because of a server error like a 500 in
281s             # status_forcelist and the given method is in the allowed_methods
281s             cause = ResponseError.GENERIC_ERROR
281s             if response and response.status:
281s                 if status_count is not None:
281s                     status_count -= 1
281s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
281s                 status = response.status
281s 
281s         history = self.history + (
281s             RequestHistory(method, url, error, status, redirect_location),
281s         )
281s 
281s         new_retry = self.new(
281s             total=total,
281s             connect=connect,
281s             read=read,
281s             redirect=redirect,
281s             status=status_count,
281s             other=other,
281s             history=history,
281s         )
281s 
281s         if new_retry.is_exhausted():
281s             reason = error or ResponseError(cause)
281s >           raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
281s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s >               cls.fetch_url(url)
281s 
281s notebook/tests/launchnotebook.py:53: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s notebook/tests/launchnotebook.py:82: in fetch_url
281s     return requests.get(url)
281s /usr/lib/python3/dist-packages/requests/api.py:73: in get
281s     return request("get", url, params=params, **kwargs)
281s /usr/lib/python3/dist-packages/requests/api.py:59: in request
281s     return session.request(method=method, url=url, **kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
281s     resp = self.send(prep, **send_kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
281s     r = adapter.send(request, **kwargs)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s             resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s         except (ProtocolError, OSError) as err:
281s             raise ConnectionError(err, request=request)
281s 
281s         except MaxRetryError as e:
281s             if isinstance(e.reason, ConnectTimeoutError):
281s                 # TODO: Remove this in 3.0.0: see #2811
281s                 if not isinstance(e.reason, NewConnectionError):
281s                     raise ConnectTimeout(e, request=request)
281s 
281s             if isinstance(e.reason, ResponseError):
281s                 raise RetryError(e, request=request)
281s 
281s             if isinstance(e.reason, _ProxyError):
281s                 raise ProxyError(e, request=request)
281s 
281s             if isinstance(e.reason, _SSLError):
281s                 # This branch is for urllib3 v1.22 and later.
281s                 raise SSLError(e, request=request)
281s 
281s >           raise ConnectionError(e, request=request)
281s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def setup_class(cls):
281s         if not async_testing_enabled: # Can be removed once jupyter_client >= 6.1 is required.
281s             raise SkipTest("AsyncKernelAPITest tests skipped due to down-level jupyter_client!")
281s >       super().setup_class()
281s 
281s notebook/services/kernels/tests/test_kernels_api.py:206: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s notebook/tests/launchnotebook.py:198: in setup_class
281s     cls.wait_until_alive()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E                   RuntimeError: The notebook server failed to start
281s 
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s _____________ ERROR at setup of AsyncKernelAPITest.test_no_kernels _____________
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s >           sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s     raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s 
281s     def create_connection(
281s         address: tuple[str, int],
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         source_address: tuple[str, int] | None = None,
281s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s     ) -> socket.socket:
281s         """Connect to *address* and return the socket object.
281s 
281s         Convenience function. Connect to *address* (a 2-tuple ``(host,
281s         port)``) and return the socket object. Passing the optional
281s         *timeout* parameter will set the timeout on the socket instance
281s         before attempting to connect. If no *timeout* is supplied, the
281s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
281s         is used. If *source_address* is set it must be a tuple of (host, port)
281s         for the socket to bind as a source address before making the connection.
281s         An host of '' or port 0 tells the OS to use the default.
281s         """
281s 
281s         host, port = address
281s         if host.startswith("["):
281s             host = host.strip("[]")
281s         err = None
281s 
281s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
281s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
281s         # The original create_connection function always returns all records.
281s         family = allowed_gai_family()
281s 
281s         try:
281s             host.encode("idna")
281s         except UnicodeError:
281s             raise LocationParseError(f"'{host}', label empty or too long") from None
281s 
281s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
281s             af, socktype, proto, canonname, sa = res
281s             sock = None
281s             try:
281s                 sock = socket.socket(af, socktype, proto)
281s 
281s                 # If provided, set socket level options before connecting.
281s                 _set_socket_options(sock, socket_options)
281s 
281s                 if timeout is not _DEFAULT_TIMEOUT:
281s                     sock.settimeout(timeout)
281s                 if source_address:
281s                     sock.bind(source_address)
281s >               sock.connect(sa)
281s E               ConnectionRefusedError: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s method = 'GET', url = '/a%40b/api/contents', body = None
281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s redirect = False, assert_same_host = False
281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
281s release_conn = False, chunked = False, body_pos = None, preload_content = False
281s decode_content = False, response_kw = {}
281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
281s destination_scheme = None, conn = None, release_this_conn = True
281s http_tunnel_required = False, err = None, clean_exit = False
281s 
281s     def urlopen( # type: ignore[override]
281s         self,
281s         method: str,
281s         url: str,
281s         body: _TYPE_BODY | None = None,
281s         headers: typing.Mapping[str, str] | None = None,
281s         retries: Retry | bool | int | None = None,
281s         redirect: bool = True,
281s         assert_same_host: bool = True,
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         pool_timeout: int | None = None,
281s         release_conn: bool | None = None,
281s         chunked: bool = False,
281s         body_pos: _TYPE_BODY_POSITION | None = None,
281s         preload_content: bool = True,
281s         decode_content: bool = True,
281s         **response_kw: typing.Any,
281s     ) -> BaseHTTPResponse:
281s         """
281s         Get a connection from the pool and perform an HTTP request. This is the
281s         lowest level call for making a request, so you'll need to specify all
281s         the raw details.
281s 
281s         .. note::
281s 
281s            More commonly, it's appropriate to use a convenience method
281s            such as :meth:`request`.
281s 
281s         .. note::
281s 
281s            `release_conn` will only behave as expected if
281s            `preload_content=False` because we want to make
281s            `preload_content=False` the default behaviour someday soon without
281s            breaking backwards compatibility.
281s 
281s         :param method:
281s             HTTP request method (such as GET, POST, PUT, etc.)
281s 
281s         :param url:
281s             The URL to perform the request on.
281s 
281s         :param body:
281s             Data to send in the request body, either :class:`str`, :class:`bytes`,
281s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
281s 
281s         :param headers:
281s             Dictionary of custom headers to send, such as User-Agent,
281s             If-None-Match, etc. If None, pool headers are used. If provided,
281s             these headers completely replace any pool-specific headers.
281s 
281s         :param retries:
281s             Configure the number of retries to allow before raising a
281s             :class:`~urllib3.exceptions.MaxRetryError` exception.
281s 
281s             Pass ``None`` to retry until you receive a response. Pass a
281s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
281s             over different types of retries.
281s             Pass an integer number to retry connection errors that many times,
281s             but no other types of errors. Pass zero to never retry.
281s 
281s             If ``False``, then retries are disabled and any exception is raised
281s             immediately. Also, instead of raising a MaxRetryError on redirects,
281s             the redirect response will be returned.
281s 
281s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
281s 
281s         :param redirect:
281s             If True, automatically handle redirects (status codes 301, 302,
281s             303, 307, 308). Each redirect counts as a retry. Disabling retries
281s             will disable redirect, too.
281s 
281s         :param assert_same_host:
281s             If ``True``, will make sure that the host of the pool requests is
281s             consistent else will raise HostChangedError. When ``False``, you can
281s             use the pool on an HTTP proxy and request foreign hosts.
281s 
281s         :param timeout:
281s             If specified, overrides the default timeout for this one
281s             request. It may be a float (in seconds) or an instance of
281s             :class:`urllib3.util.Timeout`.
281s 
281s         :param pool_timeout:
281s             If set and the pool is set to block=True, then this method will
281s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
281s             connection is available within the time period.
281s 
281s         :param bool preload_content:
281s             If True, the response's body will be preloaded into memory.
281s 
281s         :param bool decode_content:
281s             If True, will attempt to decode the body based on the
281s             'content-encoding' header.
281s 
281s         :param release_conn:
281s             If False, then the urlopen call will not release the connection
281s             back into the pool once a response is received (but will release if
281s             you read the entire contents of the response such as when
281s             `preload_content=True`). This is useful if you're not preloading
281s             the response's content immediately. You will need to call
281s             ``r.release_conn()`` on the response ``r`` to return the connection
281s             back into the pool. If None, it takes the value of ``preload_content``
281s             which defaults to ``True``.
281s 
281s         :param bool chunked:
281s             If True, urllib3 will send the body using chunked transfer
281s             encoding. Otherwise, urllib3 will send the body using the standard
281s             content-length form. Defaults to False.
281s 
281s         :param int body_pos:
281s             Position to seek to in file-like body in the event of a retry or
281s             redirect. Typically this won't need to be set because urllib3 will
281s             auto-populate the value when needed.
281s         """
281s         parsed_url = parse_url(url)
281s         destination_scheme = parsed_url.scheme
281s 
281s         if headers is None:
281s             headers = self.headers
281s 
281s         if not isinstance(retries, Retry):
281s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
281s 
281s         if release_conn is None:
281s             release_conn = preload_content
281s 
281s         # Check host
281s         if assert_same_host and not self.is_same_host(url):
281s             raise HostChangedError(self, url, retries)
281s 
281s         # Ensure that the URL we're connecting to is properly encoded
281s         if url.startswith("/"):
281s             url = to_str(_encode_target(url))
281s         else:
281s             url = to_str(parsed_url.url)
281s 
281s         conn = None
281s 
281s         # Track whether `conn` needs to be released before
281s         # returning/raising/recursing. Update this variable if necessary, and
281s         # leave `release_conn` constant throughout the function. That way, if
281s         # the function recurses, the original value of `release_conn` will be
281s         # passed down into the recursive call, and its value will be respected.
281s         #
281s         # See issue #651 [1] for details.
281s         #
281s         # [1]
281s         release_this_conn = release_conn
281s 
281s         http_tunnel_required = connection_requires_http_tunnel(
281s             self.proxy, self.proxy_config, destination_scheme
281s         )
281s 
281s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
281s         # have to copy the headers dict so we can safely change it without those
281s         # changes being reflected in anyone else's copy.
281s         if not http_tunnel_required:
281s             headers = headers.copy() # type: ignore[attr-defined]
281s             headers.update(self.proxy_headers) # type: ignore[union-attr]
281s 
281s         # Must keep the exception bound to a separate variable or else Python 3
281s         # complains about UnboundLocalError.
281s         err = None
281s 
281s         # Keep track of whether we cleanly exited the except block. This
281s         # ensures we do proper cleanup in finally.
281s         clean_exit = False
281s 
281s         # Rewind body position, if needed. Record current position
281s         # for future rewinds in the event of a redirect/retry.
281s         body_pos = set_file_position(body, body_pos)
281s 
281s         try:
281s             # Request a connection from the queue.
281s             timeout_obj = self._get_timeout(timeout)
281s             conn = self._get_conn(timeout=pool_timeout)
281s 
281s             conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment]
281s 
281s             # Is this a closed/new connection that requires CONNECT tunnelling?
281s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
281s                 try:
281s                     self._prepare_proxy(conn)
281s                 except (BaseSSLError, OSError, SocketTimeout) as e:
281s                     self._raise_timeout(
281s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
281s                     )
281s                     raise
281s 
281s             # If we're going to release the connection in ``finally:``, then
281s             # the response doesn't need to know about the connection. Otherwise
281s             # it will also try to release it and we'll have a double-release
281s             # mess.
281s             response_conn = conn if not release_conn else None
281s 
281s             # Make the request on the HTTPConnection object
281s >           response = self._make_request(
281s                 conn,
281s                 method,
281s                 url,
281s                 timeout=timeout_obj,
281s                 body=body,
281s                 headers=headers,
281s                 chunked=chunked,
281s                 retries=retries,
281s                 response_conn=response_conn,
281s                 preload_content=preload_content,
281s                 decode_content=decode_content,
281s                 **response_kw,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
281s     conn.request(
281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
281s     self.endheaders()
281s /usr/lib/python3.12/http/client.py:1331: in endheaders
281s     self._send_output(message_body, encode_chunked=encode_chunked)
281s /usr/lib/python3.12/http/client.py:1091: in _send_output
281s     self.send(msg)
281s /usr/lib/python3.12/http/client.py:1035: in send
281s     self.connect()
281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
281s     self.sock = self._new_conn()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s             sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s         except socket.gaierror as e:
281s             raise NameResolutionError(self.host, self, e) from e
281s         except SocketTimeout as e:
281s             raise ConnectTimeoutError(
281s                 self,
281s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
281s             ) from e
281s 
281s         except OSError as e:
281s >           raise NewConnectionError(
281s                 self, f"Failed to establish a new connection: {e}"
281s             ) from e
281s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s >           resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
281s     retries = retries.increment(
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s method = 'GET', url = '/a%40b/api/contents', response = None
281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
281s _pool = 
281s _stacktrace = 
281s 
281s     def increment(
281s         self,
281s         method: str | None = None,
281s         url: str | None = None,
281s         response: BaseHTTPResponse | None = None,
281s         error: Exception | None = None,
281s         _pool: ConnectionPool | None = None,
281s         _stacktrace: TracebackType | None = None,
281s     ) -> Retry:
281s         """Return a new Retry object with incremented retry counters.
281s 
281s         :param response: A response object, or None, if the server did not
281s             return a response.
281s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
281s         :param Exception error: An error encountered during the request, or
281s             None if the response was received successfully.
281s 
281s         :return: A new ``Retry`` object.
281s         """
281s         if self.total is False and error:
281s             # Disabled, indicate to re-raise the error.
281s             raise reraise(type(error), error, _stacktrace)
281s 
281s         total = self.total
281s         if total is not None:
281s             total -= 1
281s 
281s         connect = self.connect
281s         read = self.read
281s         redirect = self.redirect
281s         status_count = self.status
281s         other = self.other
281s         cause = "unknown"
281s         status = None
281s         redirect_location = None
281s 
281s         if error and self._is_connection_error(error):
281s             # Connect retry?
281s             if connect is False:
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif connect is not None:
281s                 connect -= 1
281s 
281s         elif error and self._is_read_error(error):
281s             # Read retry?
281s             if read is False or method is None or not self._is_method_retryable(method):
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif read is not None:
281s                 read -= 1
281s 
281s         elif error:
281s             # Other retry?
281s             if other is not None:
281s                 other -= 1
281s 
281s         elif response and response.get_redirect_location():
281s             # Redirect retry?
281s             if redirect is not None:
281s                 redirect -= 1
281s             cause = "too many redirects"
281s             response_redirect_location = response.get_redirect_location()
281s             if response_redirect_location:
281s                 redirect_location = response_redirect_location
281s             status = response.status
281s 
281s         else:
281s             # Incrementing because of a server error like a 500 in
281s             # status_forcelist and the given method is in the allowed_methods
281s             cause = ResponseError.GENERIC_ERROR
281s             if response and response.status:
281s                 if status_count is not None:
281s                     status_count -= 1
281s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
281s                 status = response.status
281s 
281s         history = self.history + (
281s             RequestHistory(method, url, error, status, redirect_location),
281s         )
281s 
281s         new_retry = self.new(
281s             total=total,
281s             connect=connect,
281s             read=read,
281s             redirect=redirect,
281s             status=status_count,
281s             other=other,
281s             history=history,
281s         )
281s 
281s         if new_retry.is_exhausted():
281s             reason = error or ResponseError(cause)
281s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
281s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s >               cls.fetch_url(url)
281s 
281s notebook/tests/launchnotebook.py:53: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s notebook/tests/launchnotebook.py:82: in fetch_url
281s     return requests.get(url)
281s /usr/lib/python3/dist-packages/requests/api.py:73: in get
281s     return request("get", url, params=params, **kwargs)
281s /usr/lib/python3/dist-packages/requests/api.py:59: in request
281s     return session.request(method=method, url=url, **kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
281s     resp = self.send(prep, **send_kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
281s     r = adapter.send(request, **kwargs)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s             resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s         except (ProtocolError, OSError) as err:
281s             raise ConnectionError(err, request=request)
281s 
281s         except MaxRetryError as e:
281s             if isinstance(e.reason, ConnectTimeoutError):
281s                 # TODO: Remove this in 3.0.0: see #2811
281s                 if not isinstance(e.reason, NewConnectionError):
281s                     raise ConnectTimeout(e, request=request)
281s 
281s             if isinstance(e.reason, ResponseError):
281s                 raise RetryError(e, request=request)
281s 
281s             if isinstance(e.reason, _ProxyError):
281s                 raise ProxyError(e, request=request)
281s 
281s             if isinstance(e.reason, _SSLError):
281s                 # This branch is for urllib3 v1.22 and later.
281s                 raise SSLError(e, request=request)
281s 
281s >           raise ConnectionError(e, request=request)
281s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def setup_class(cls):
281s         if not async_testing_enabled:  # Can be removed once jupyter_client >= 6.1 is required.
281s             raise SkipTest("AsyncKernelAPITest tests skipped due to down-level jupyter_client!")
281s >       super().setup_class()
281s 
281s notebook/services/kernels/tests/test_kernels_api.py:206: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s notebook/tests/launchnotebook.py:198: in setup_class
281s     cls.wait_until_alive()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E                   RuntimeError: The notebook server failed to start
281s 
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s ________________ ERROR at setup of KernelFilterTest.test_config ________________
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s >           sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s     raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s 
281s     def create_connection(
281s         address: tuple[str, int],
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         source_address: tuple[str, int] | None = None,
281s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s     ) -> socket.socket:
281s         """Connect to *address* and return the socket object.
281s 
281s         Convenience function. Connect to *address* (a 2-tuple ``(host,
281s         port)``) and return the socket object. Passing the optional
281s         *timeout* parameter will set the timeout on the socket instance
281s         before attempting to connect. If no *timeout* is supplied, the
281s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
281s         is used. If *source_address* is set it must be a tuple of (host, port)
281s         for the socket to bind as a source address before making the connection.
281s         An host of '' or port 0 tells the OS to use the default.
281s         """
281s 
281s         host, port = address
281s         if host.startswith("["):
281s             host = host.strip("[]")
281s         err = None
281s 
281s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
281s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
281s         # The original create_connection function always returns all records.
281s         family = allowed_gai_family()
281s 
281s         try:
281s             host.encode("idna")
281s         except UnicodeError:
281s             raise LocationParseError(f"'{host}', label empty or too long") from None
281s 
281s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
281s             af, socktype, proto, canonname, sa = res
281s             sock = None
281s             try:
281s                 sock = socket.socket(af, socktype, proto)
281s 
281s                 # If provided, set socket level options before connecting.
281s                 _set_socket_options(sock, socket_options)
281s 
281s                 if timeout is not _DEFAULT_TIMEOUT:
281s                     sock.settimeout(timeout)
281s                 if source_address:
281s                     sock.bind(source_address)
281s >               sock.connect(sa)
281s E               ConnectionRefusedError: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s method = 'GET', url = '/a%40b/api/contents', body = None
281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s redirect = False, assert_same_host = False
281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
281s release_conn = False, chunked = False, body_pos = None, preload_content = False
281s decode_content = False, response_kw = {}
281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
281s destination_scheme = None, conn = None, release_this_conn = True
281s http_tunnel_required = False, err = None, clean_exit = False
281s 
281s     def urlopen(  # type: ignore[override]
281s         self,
281s         method: str,
281s         url: str,
281s         body: _TYPE_BODY | None = None,
281s         headers: typing.Mapping[str, str] | None = None,
281s         retries: Retry | bool | int | None = None,
281s         redirect: bool = True,
281s         assert_same_host: bool = True,
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         pool_timeout: int | None = None,
281s         release_conn: bool | None = None,
281s         chunked: bool = False,
281s         body_pos: _TYPE_BODY_POSITION | None = None,
281s         preload_content: bool = True,
281s         decode_content: bool = True,
281s         **response_kw: typing.Any,
281s     ) -> BaseHTTPResponse:
281s         """
281s         Get a connection from the pool and perform an HTTP request. This is the
281s         lowest level call for making a request, so you'll need to specify all
281s         the raw details.
281s 
281s         .. note::
281s 
281s            More commonly, it's appropriate to use a convenience method
281s            such as :meth:`request`.
281s 
281s         .. note::
281s 
281s            `release_conn` will only behave as expected if
281s            `preload_content=False` because we want to make
281s            `preload_content=False` the default behaviour someday soon without
281s            breaking backwards compatibility.
281s 
281s         :param method:
281s             HTTP request method (such as GET, POST, PUT, etc.)
281s 
281s         :param url:
281s             The URL to perform the request on.
281s 
281s         :param body:
281s             Data to send in the request body, either :class:`str`, :class:`bytes`,
281s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
281s 
281s         :param headers:
281s             Dictionary of custom headers to send, such as User-Agent,
281s             If-None-Match, etc. If None, pool headers are used. If provided,
281s             these headers completely replace any pool-specific headers.
281s 
281s         :param retries:
281s             Configure the number of retries to allow before raising a
281s             :class:`~urllib3.exceptions.MaxRetryError` exception.
281s 
281s             Pass ``None`` to retry until you receive a response. Pass a
281s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
281s             over different types of retries.
281s             Pass an integer number to retry connection errors that many times,
281s             but no other types of errors. Pass zero to never retry.
281s 
281s             If ``False``, then retries are disabled and any exception is raised
281s             immediately. Also, instead of raising a MaxRetryError on redirects,
281s             the redirect response will be returned.
281s 
281s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
281s 
281s         :param redirect:
281s             If True, automatically handle redirects (status codes 301, 302,
281s             303, 307, 308). Each redirect counts as a retry. Disabling retries
281s             will disable redirect, too.
281s 
281s         :param assert_same_host:
281s             If ``True``, will make sure that the host of the pool requests is
281s             consistent else will raise HostChangedError. When ``False``, you can
281s             use the pool on an HTTP proxy and request foreign hosts.
281s 
281s         :param timeout:
281s             If specified, overrides the default timeout for this one
281s             request. It may be a float (in seconds) or an instance of
281s             :class:`urllib3.util.Timeout`.
281s 
281s         :param pool_timeout:
281s             If set and the pool is set to block=True, then this method will
281s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
281s             connection is available within the time period.
281s 
281s         :param bool preload_content:
281s             If True, the response's body will be preloaded into memory.
281s 
281s         :param bool decode_content:
281s             If True, will attempt to decode the body based on the
281s             'content-encoding' header.
281s 
281s         :param release_conn:
281s             If False, then the urlopen call will not release the connection
281s             back into the pool once a response is received (but will release if
281s             you read the entire contents of the response such as when
281s             `preload_content=True`). This is useful if you're not preloading
281s             the response's content immediately. You will need to call
281s             ``r.release_conn()`` on the response ``r`` to return the connection
281s             back into the pool. If None, it takes the value of ``preload_content``
281s             which defaults to ``True``.
281s 
281s         :param bool chunked:
281s             If True, urllib3 will send the body using chunked transfer
281s             encoding. Otherwise, urllib3 will send the body using the standard
281s             content-length form. Defaults to False.
281s 
281s         :param int body_pos:
281s             Position to seek to in file-like body in the event of a retry or
281s             redirect. Typically this won't need to be set because urllib3 will
281s             auto-populate the value when needed.
281s         """
281s         parsed_url = parse_url(url)
281s         destination_scheme = parsed_url.scheme
281s 
281s         if headers is None:
281s             headers = self.headers
281s 
281s         if not isinstance(retries, Retry):
281s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
281s 
281s         if release_conn is None:
281s             release_conn = preload_content
281s 
281s         # Check host
281s         if assert_same_host and not self.is_same_host(url):
281s             raise HostChangedError(self, url, retries)
281s 
281s         # Ensure that the URL we're connecting to is properly encoded
281s         if url.startswith("/"):
281s             url = to_str(_encode_target(url))
281s         else:
281s             url = to_str(parsed_url.url)
281s 
281s         conn = None
281s 
281s         # Track whether `conn` needs to be released before
281s         # returning/raising/recursing. Update this variable if necessary, and
281s         # leave `release_conn` constant throughout the function. That way, if
281s         # the function recurses, the original value of `release_conn` will be
281s         # passed down into the recursive call, and its value will be respected.
281s         #
281s         # See issue #651 [1] for details.
281s         #
281s         # [1] 
281s         release_this_conn = release_conn
281s 
281s         http_tunnel_required = connection_requires_http_tunnel(
281s             self.proxy, self.proxy_config, destination_scheme
281s         )
281s 
281s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
281s         # have to copy the headers dict so we can safely change it without those
281s         # changes being reflected in anyone else's copy.
281s         if not http_tunnel_required:
281s             headers = headers.copy()  # type: ignore[attr-defined]
281s             headers.update(self.proxy_headers)  # type: ignore[union-attr]
281s 
281s         # Must keep the exception bound to a separate variable or else Python 3
281s         # complains about UnboundLocalError.
281s         err = None
281s 
281s         # Keep track of whether we cleanly exited the except block. This
281s         # ensures we do proper cleanup in finally.
281s         clean_exit = False
281s 
281s         # Rewind body position, if needed. Record current position
281s         # for future rewinds in the event of a redirect/retry.
281s         body_pos = set_file_position(body, body_pos)
281s 
281s         try:
281s             # Request a connection from the queue.
281s             timeout_obj = self._get_timeout(timeout)
281s             conn = self._get_conn(timeout=pool_timeout)
281s 
281s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
281s 
281s             # Is this a closed/new connection that requires CONNECT tunnelling?
281s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
281s                 try:
281s                     self._prepare_proxy(conn)
281s                 except (BaseSSLError, OSError, SocketTimeout) as e:
281s                     self._raise_timeout(
281s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
281s                     )
281s                     raise
281s 
281s             # If we're going to release the connection in ``finally:``, then
281s             # the response doesn't need to know about the connection. Otherwise
281s             # it will also try to release it and we'll have a double-release
281s             # mess.
281s             response_conn = conn if not release_conn else None
281s 
281s             # Make the request on the HTTPConnection object
281s >           response = self._make_request(
281s                 conn,
281s                 method,
281s                 url,
281s                 timeout=timeout_obj,
281s                 body=body,
281s                 headers=headers,
281s                 chunked=chunked,
281s                 retries=retries,
281s                 response_conn=response_conn,
281s                 preload_content=preload_content,
281s                 decode_content=decode_content,
281s                 **response_kw,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
281s     conn.request(
281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
281s     self.endheaders()
281s /usr/lib/python3.12/http/client.py:1331: in endheaders
281s     self._send_output(message_body, encode_chunked=encode_chunked)
281s /usr/lib/python3.12/http/client.py:1091: in _send_output
281s     self.send(msg)
281s /usr/lib/python3.12/http/client.py:1035: in send
281s     self.connect()
281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
281s     self.sock = self._new_conn()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s             sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s         except socket.gaierror as e:
281s             raise NameResolutionError(self.host, self, e) from e
281s         except SocketTimeout as e:
281s             raise ConnectTimeoutError(
281s                 self,
281s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
281s             ) from e
281s 
281s         except OSError as e:
281s >           raise NewConnectionError(
281s                 self, f"Failed to establish a new connection: {e}"
281s             ) from e
281s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s >           resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
281s     retries = retries.increment(
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s method = 'GET', url = '/a%40b/api/contents', response = None
281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
281s _pool = 
281s _stacktrace = 
281s 
281s     def increment(
281s         self,
281s         method: str | None = None,
281s         url: str | None = None,
281s         response: BaseHTTPResponse | None = None,
281s         error: Exception | None = None,
281s         _pool: ConnectionPool | None = None,
281s         _stacktrace: TracebackType | None = None,
281s     ) -> Retry:
281s         """Return a new Retry object with incremented retry counters.
281s 
281s         :param response: A response object, or None, if the server did not
281s             return a response.
281s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
281s         :param Exception error: An error encountered during the request, or
281s             None if the response was received successfully.
281s 
281s         :return: A new ``Retry`` object.
281s         """
281s         if self.total is False and error:
281s             # Disabled, indicate to re-raise the error.
281s             raise reraise(type(error), error, _stacktrace)
281s 
281s         total = self.total
281s         if total is not None:
281s             total -= 1
281s 
281s         connect = self.connect
281s         read = self.read
281s         redirect = self.redirect
281s         status_count = self.status
281s         other = self.other
281s         cause = "unknown"
281s         status = None
281s         redirect_location = None
281s 
281s         if error and self._is_connection_error(error):
281s             # Connect retry?
281s             if connect is False:
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif connect is not None:
281s                 connect -= 1
281s 
281s         elif error and self._is_read_error(error):
281s             # Read retry?
281s             if read is False or method is None or not self._is_method_retryable(method):
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif read is not None:
281s                 read -= 1
281s 
281s         elif error:
281s             # Other retry?
281s             if other is not None:
281s                 other -= 1
281s 
281s         elif response and response.get_redirect_location():
281s             # Redirect retry?
281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s 
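The `MaxRetryError` above comes from the `Retry.increment` logic shown in the traceback: with `Retry(total=0)`, the very first connection error decrements `total` to -1, `is_exhausted()` becomes true, and the error is raised instead of retried. A minimal stdlib-only model of that decrement-and-check flow (not the real urllib3 class; the real one tracks more counters and history):

```python
# Toy model of urllib3's retry exhaustion as seen in the traceback:
# total=0 means "no retries", so one connection error is fatal.

class MaxRetryError(Exception):
    """Stand-in for urllib3.exceptions.MaxRetryError."""


class Retry:
    def __init__(self, total=0, connect=None):
        self.total = total      # overall retry budget (None = unlimited)
        self.connect = connect  # connect-specific budget (None = use total)

    def is_exhausted(self):
        # Exhausted once the overall budget goes negative.
        return self.total is not None and self.total < 0

    def increment(self, error=None):
        total = self.total
        if total is not None:
            total -= 1
        connect = self.connect
        if error is not None and connect is not None:
            connect -= 1
        new_retry = Retry(total=total, connect=connect)
        if new_retry.is_exhausted():
            # Mirrors: raise MaxRetryError(_pool, url, reason) from reason
            raise MaxRetryError(error) from error
        return new_retry


# Reproduce the log's situation: total=0 plus a refused connection.
try:
    Retry(total=0).increment(error=ConnectionRefusedError(111, "Connection refused"))
except MaxRetryError as e:
    print("retries exhausted:", e)
```

With `total=2` the same call would instead return a fresh `Retry(total=1)` and the request would be attempted again, which is why the log's `Retry(total=0, …)` line is the key detail: requests' default adapter disables retries, so one `ConnectionRefusedError` surfaces immediately.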
/usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 
281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
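The `setup_class`/`start_thread` code above runs the notebook server in a daemon thread and uses a `threading.Event` (set in a `finally:` block) so that a failed startup wakes the waiting test thread instead of hanging it. A hedged, stdlib-only sketch of that same pattern, using `http.server` in place of `NotebookApp` (the handler and port-0 binding here are illustrative choices, not the harness's code):

```python
# Sketch of the start-server-in-a-daemon-thread pattern from setup_class:
# signal readiness via an Event, and set it in `finally` so a crash on
# startup does not leave the main thread blocked in started.wait().
import threading
from http.server import HTTPServer, BaseHTTPRequestHandler


class OkHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # keep test output quiet


started = threading.Event()
server = HTTPServer(("127.0.0.1", 0), OkHandler)  # port 0: OS picks a free port

def start_thread():
    try:
        started.set()           # tell the main thread the socket is bound
        server.serve_forever()
    finally:
        started.set()           # ensure waiters wake even if startup failed

thread = threading.Thread(target=start_thread, daemon=True)
thread.start()
started.wait(timeout=5)
port = server.server_address[1]
print("listening on port", port)
server.shutdown()
```

The failure mode in this log is exactly the gap this pattern tolerates: `started` was set (in `finally:`), but the server never actually bound its port, so the subsequent liveness poll got `Connection refused` until it gave up.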
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s _______________ ERROR at setup of KernelCullingTest.test_culling _______________ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s ___________ ERROR at setup of APITest.test_get_kernel_resource_file ____________ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s                     raise SSLError(e, request=request)
281s 
281s >           raise ConnectionError(e, request=request)
281s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def setup_class(cls):
281s         cls.tmp_dir = TemporaryDirectory()
281s         def tmp(*parts):
281s             path = os.path.join(cls.tmp_dir.name, *parts)
281s             try:
281s                 os.makedirs(path)
281s             except OSError as e:
281s                 if e.errno != errno.EEXIST:
281s                     raise
281s             return path
281s 
281s         cls.home_dir = tmp('home')
281s         data_dir = cls.data_dir = tmp('data')
281s         config_dir = cls.config_dir = tmp('config')
281s         runtime_dir = cls.runtime_dir = tmp('runtime')
281s         cls.notebook_dir = tmp('notebooks')
281s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
281s         cls.env_patch.start()
281s         # Patch systemwide & user-wide data & config directories, to isolate
281s         # the tests from oddities of the local setup. But leave Python env
281s         # locations alone, so data files for e.g. nbconvert are accessible.
281s         # If this isolation isn't sufficient, you may need to run the tests in
281s         # a virtualenv or conda env.
281s         cls.path_patch = patch.multiple(
281s             jupyter_core.paths,
281s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
281s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
281s         )
281s         cls.path_patch.start()
281s 
281s         config = cls.config or Config()
281s         config.NotebookNotary.db_file = ':memory:'
281s 
281s         cls.token = hexlify(os.urandom(4)).decode('ascii')
281s 
281s         started = Event()
281s         def start_thread():
281s             try:
281s                 bind_args = cls.get_bind_args()
281s                 app = cls.notebook = NotebookApp(
281s                     port_retries=0,
281s                     open_browser=False,
281s                     config_dir=cls.config_dir,
281s                     data_dir=cls.data_dir,
281s                     runtime_dir=cls.runtime_dir,
281s                     notebook_dir=cls.notebook_dir,
281s                     base_url=cls.url_prefix,
281s                     config=config,
281s                     allow_root=True,
281s                     token=cls.token,
281s                     **bind_args
281s                 )
281s                 if "asyncio" in sys.modules:
281s                     app._init_asyncio_patch()
281s                 import asyncio
281s 
281s                 asyncio.set_event_loop(asyncio.new_event_loop())
281s                 # Patch the current loop in order to match production
281s                 # behavior
281s                 import nest_asyncio
281s 
281s                 nest_asyncio.apply()
281s                 # don't register signal handler during tests
281s                 app.init_signal = lambda : None
281s                 # clear log handlers and propagate to root for nose to capture it
281s                 # needs to be redone after initialize, which reconfigures logging
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 app.initialize(argv=cls.get_argv())
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 loop = IOLoop.current()
281s                 loop.add_callback(started.set)
281s                 app.start()
281s             finally:
281s                 # set the event, so failure to start doesn't cause a hang
281s                 started.set()
281s                 app.session_manager.close()
281s         cls.notebook_thread = Thread(target=start_thread)
281s         cls.notebook_thread.daemon = True
281s         cls.notebook_thread.start()
281s         started.wait()
281s >       cls.wait_until_alive()
281s 
281s notebook/tests/launchnotebook.py:198: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E                   RuntimeError: The notebook server failed to start
281s 
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s ________________ ERROR at setup of APITest.test_get_kernelspec _________________
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s >           sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s     raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s 
281s def create_connection(
281s     address: tuple[str, int],
281s     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s     source_address: tuple[str, int] | None = None,
281s     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s ) -> socket.socket:
281s     """Connect to *address* and return the socket object.
281s 
281s     Convenience function. Connect to *address* (a 2-tuple ``(host,
281s     port)``) and return the socket object. Passing the optional
281s     *timeout* parameter will set the timeout on the socket instance
281s     before attempting to connect. If no *timeout* is supplied, the
281s     global default timeout setting returned by :func:`socket.getdefaulttimeout`
281s     is used. If *source_address* is set it must be a tuple of (host, port)
281s     for the socket to bind as a source address before making the connection.
281s     An host of '' or port 0 tells the OS to use the default.
281s     """
281s 
281s     host, port = address
281s     if host.startswith("["):
281s         host = host.strip("[]")
281s     err = None
281s 
281s     # Using the value from allowed_gai_family() in the context of getaddrinfo lets
281s     # us select whether to work with IPv4 DNS records, IPv6 records, or both.
281s     # The original create_connection function always returns all records.
281s     family = allowed_gai_family()
281s 
281s     try:
281s         host.encode("idna")
281s     except UnicodeError:
281s         raise LocationParseError(f"'{host}', label empty or too long") from None
281s 
281s     for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
281s         af, socktype, proto, canonname, sa = res
281s         sock = None
281s         try:
281s             sock = socket.socket(af, socktype, proto)
281s 
281s             # If provided, set socket level options before connecting.
281s             _set_socket_options(sock, socket_options)
281s 
281s             if timeout is not _DEFAULT_TIMEOUT:
281s                 sock.settimeout(timeout)
281s             if source_address:
281s                 sock.bind(source_address)
281s >           sock.connect(sa)
281s E           ConnectionRefusedError: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s method = 'GET', url = '/a%40b/api/contents', body = None
281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s redirect = False, assert_same_host = False
281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
281s release_conn = False, chunked = False, body_pos = None, preload_content = False
281s decode_content = False, response_kw = {}
281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
281s destination_scheme = None, conn = None, release_this_conn = True
281s http_tunnel_required = False, err = None, clean_exit = False
281s 
281s     def urlopen( # type: ignore[override]
281s         self,
281s         method: str,
281s         url: str,
281s         body: _TYPE_BODY | None = None,
281s         headers: typing.Mapping[str, str] | None = None,
281s         retries: Retry | bool | int | None = None,
281s         redirect: bool = True,
281s         assert_same_host: bool = True,
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         pool_timeout: int | None = None,
281s         release_conn: bool | None = None,
281s         chunked: bool = False,
281s         body_pos: _TYPE_BODY_POSITION | None = None,
281s         preload_content: bool = True,
281s         decode_content: bool = True,
281s         **response_kw: typing.Any,
281s     ) -> BaseHTTPResponse:
281s         """
281s         Get a connection from the pool and perform an HTTP request. This is the
281s         lowest level call for making a request, so you'll need to specify all
281s         the raw details.
281s 
281s         .. note::
281s 
281s            More commonly, it's appropriate to use a convenience method
281s            such as :meth:`request`.
281s 
281s         .. note::
281s 
281s            `release_conn` will only behave as expected if
281s            `preload_content=False` because we want to make
281s            `preload_content=False` the default behaviour someday soon without
281s            breaking backwards compatibility.
281s 
281s         :param method:
281s             HTTP request method (such as GET, POST, PUT, etc.)
281s 
281s         :param url:
281s             The URL to perform the request on.
281s 
281s         :param body:
281s             Data to send in the request body, either :class:`str`, :class:`bytes`,
281s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
281s 
281s         :param headers:
281s             Dictionary of custom headers to send, such as User-Agent,
281s             If-None-Match, etc. If None, pool headers are used. If provided,
281s             these headers completely replace any pool-specific headers.
281s 
281s         :param retries:
281s             Configure the number of retries to allow before raising a
281s             :class:`~urllib3.exceptions.MaxRetryError` exception.
281s 
281s             Pass ``None`` to retry until you receive a response. Pass a
281s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
281s             over different types of retries.
281s             Pass an integer number to retry connection errors that many times,
281s             but no other types of errors. Pass zero to never retry.
281s 
281s             If ``False``, then retries are disabled and any exception is raised
281s             immediately. Also, instead of raising a MaxRetryError on redirects,
281s             the redirect response will be returned.
281s 
281s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
281s 
281s         :param redirect:
281s             If True, automatically handle redirects (status codes 301, 302,
281s             303, 307, 308). Each redirect counts as a retry. Disabling retries
281s             will disable redirect, too.
281s 
281s         :param assert_same_host:
281s             If ``True``, will make sure that the host of the pool requests is
281s             consistent else will raise HostChangedError. When ``False``, you can
281s             use the pool on an HTTP proxy and request foreign hosts.
281s 
281s         :param timeout:
281s             If specified, overrides the default timeout for this one
281s             request. It may be a float (in seconds) or an instance of
281s             :class:`urllib3.util.Timeout`.
281s 
281s         :param pool_timeout:
281s             If set and the pool is set to block=True, then this method will
281s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
281s             connection is available within the time period.
281s 
281s         :param bool preload_content:
281s             If True, the response's body will be preloaded into memory.
281s 
281s         :param bool decode_content:
281s             If True, will attempt to decode the body based on the
281s             'content-encoding' header.
281s 
281s         :param release_conn:
281s             If False, then the urlopen call will not release the connection
281s             back into the pool once a response is received (but will release if
281s             you read the entire contents of the response such as when
281s             `preload_content=True`). This is useful if you're not preloading
281s             the response's content immediately. You will need to call
281s             ``r.release_conn()`` on the response ``r`` to return the connection
281s             back into the pool. If None, it takes the value of ``preload_content``
281s             which defaults to ``True``.
281s 
281s         :param bool chunked:
281s             If True, urllib3 will send the body using chunked transfer
281s             encoding. Otherwise, urllib3 will send the body using the standard
281s             content-length form. Defaults to False.
281s 
281s         :param int body_pos:
281s             Position to seek to in file-like body in the event of a retry or
281s             redirect. Typically this won't need to be set because urllib3 will
281s             auto-populate the value when needed.
281s         """
281s         parsed_url = parse_url(url)
281s         destination_scheme = parsed_url.scheme
281s 
281s         if headers is None:
281s             headers = self.headers
281s 
281s         if not isinstance(retries, Retry):
281s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
281s 
281s         if release_conn is None:
281s             release_conn = preload_content
281s 
281s         # Check host
281s         if assert_same_host and not self.is_same_host(url):
281s             raise HostChangedError(self, url, retries)
281s 
281s         # Ensure that the URL we're connecting to is properly encoded
281s         if url.startswith("/"):
281s             url = to_str(_encode_target(url))
281s         else:
281s             url = to_str(parsed_url.url)
281s 
281s         conn = None
281s 
281s         # Track whether `conn` needs to be released before
281s         # returning/raising/recursing. Update this variable if necessary, and
281s         # leave `release_conn` constant throughout the function. That way, if
281s         # the function recurses, the original value of `release_conn` will be
281s         # passed down into the recursive call, and its value will be respected.
281s         #
281s         # See issue #651 [1] for details.
281s         #
281s         # [1] 
281s         release_this_conn = release_conn
281s 
281s         http_tunnel_required = connection_requires_http_tunnel(
281s             self.proxy, self.proxy_config, destination_scheme
281s         )
281s 
281s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
281s         # have to copy the headers dict so we can safely change it without those
281s         # changes being reflected in anyone else's copy.
281s         if not http_tunnel_required:
281s             headers = headers.copy() # type: ignore[attr-defined]
281s             headers.update(self.proxy_headers) # type: ignore[union-attr]
281s 
281s         # Must keep the exception bound to a separate variable or else Python 3
281s         # complains about UnboundLocalError.
281s         err = None
281s 
281s         # Keep track of whether we cleanly exited the except block. This
281s         # ensures we do proper cleanup in finally.
281s         clean_exit = False
281s 
281s         # Rewind body position, if needed. Record current position
281s         # for future rewinds in the event of a redirect/retry.
281s         body_pos = set_file_position(body, body_pos)
281s 
281s         try:
281s             # Request a connection from the queue.
281s             timeout_obj = self._get_timeout(timeout)
281s             conn = self._get_conn(timeout=pool_timeout)
281s 
281s             conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment]
281s 
281s             # Is this a closed/new connection that requires CONNECT tunnelling?
281s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
281s                 try:
281s                     self._prepare_proxy(conn)
281s                 except (BaseSSLError, OSError, SocketTimeout) as e:
281s                     self._raise_timeout(
281s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
281s                     )
281s                     raise
281s 
281s             # If we're going to release the connection in ``finally:``, then
281s             # the response doesn't need to know about the connection. Otherwise
281s             # it will also try to release it and we'll have a double-release
281s             # mess.
281s             response_conn = conn if not release_conn else None
281s 
281s             # Make the request on the HTTPConnection object
281s >           response = self._make_request(
281s                 conn,
281s                 method,
281s                 url,
281s                 timeout=timeout_obj,
281s                 body=body,
281s                 headers=headers,
281s                 chunked=chunked,
281s                 retries=retries,
281s                 response_conn=response_conn,
281s                 preload_content=preload_content,
281s                 decode_content=decode_content,
281s                 **response_kw,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
281s     conn.request(
281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
281s     self.endheaders()
281s /usr/lib/python3.12/http/client.py:1331: in endheaders
281s     self._send_output(message_body, encode_chunked=encode_chunked)
281s /usr/lib/python3.12/http/client.py:1091: in _send_output
281s     self.send(msg)
281s /usr/lib/python3.12/http/client.py:1035: in send
281s     self.connect()
281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
281s     self.sock = self._new_conn()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s             sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s         except socket.gaierror as e:
281s             raise NameResolutionError(self.host, self, e) from e
281s         except SocketTimeout as e:
281s             raise ConnectTimeoutError(
281s                 self,
281s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
281s             ) from e
281s 
281s         except OSError as e:
281s >           raise NewConnectionError(
281s                 self, f"Failed to establish a new connection: {e}"
281s             ) from e
281s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s >           resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
281s     retries = retries.increment(
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s method = 'GET', url = '/a%40b/api/contents', response = None
281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
281s _pool = 
281s _stacktrace = 
281s 
281s     def increment(
281s         self,
281s         method: str | None = None,
281s         url: str | None = None,
281s         response: BaseHTTPResponse | None = None,
281s         error: Exception | None = None,
281s         _pool: ConnectionPool | None = None,
281s         _stacktrace: TracebackType | None = None,
281s     ) -> Retry:
281s         """Return a new Retry object with incremented retry counters.
281s 
281s         :param response: A response object, or None, if the server did not
281s             return a response.
281s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
281s         :param Exception error: An error encountered during the request, or
281s             None if the response was received successfully.
281s 
281s         :return: A new ``Retry`` object.
281s         """
281s         if self.total is False and error:
281s             # Disabled, indicate to re-raise the error.
281s             raise reraise(type(error), error, _stacktrace)
281s 
281s         total = self.total
281s         if total is not None:
281s             total -= 1
281s 
281s         connect = self.connect
281s         read = self.read
281s         redirect = self.redirect
281s         status_count = self.status
281s         other = self.other
281s         cause = "unknown"
281s         status = None
281s         redirect_location = None
281s 
281s         if error and self._is_connection_error(error):
281s             # Connect retry?
281s             if connect is False:
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif connect is not None:
281s                 connect -= 1
281s 
281s         elif error and self._is_read_error(error):
281s             # Read retry?
281s             if read is False or method is None or not self._is_method_retryable(method):
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif read is not None:
281s                 read -= 1
281s 
281s         elif error:
281s             # Other retry?
281s             if other is not None:
281s                 other -= 1
281s 
281s         elif response and response.get_redirect_location():
281s             # Redirect retry?
281s             if redirect is not None:
281s                 redirect -= 1
281s             cause = "too many redirects"
281s             response_redirect_location = response.get_redirect_location()
281s             if response_redirect_location:
281s                 redirect_location = response_redirect_location
281s             status = response.status
281s 
281s         else:
281s             # Incrementing because of a server error like a 500 in
281s             # status_forcelist and the given method is in the allowed_methods
281s             cause = ResponseError.GENERIC_ERROR
281s             if response and response.status:
281s                 if status_count is not None:
281s                     status_count -= 1
281s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
281s                 status = response.status
281s 
281s         history = self.history + (
281s             RequestHistory(method, url, error, status, redirect_location),
281s         )
281s 
281s         new_retry = self.new(
281s             total=total,
281s             connect=connect,
281s             read=read,
281s             redirect=redirect,
281s             status=status_count,
281s             other=other,
281s             history=history,
281s         )
281s 
281s         if new_retry.is_exhausted():
281s             reason = error or ResponseError(cause)
281s >           raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
281s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s >               cls.fetch_url(url)
281s 
281s notebook/tests/launchnotebook.py:53: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s notebook/tests/launchnotebook.py:82: in fetch_url
281s     return requests.get(url)
281s /usr/lib/python3/dist-packages/requests/api.py:73: in get
281s     return request("get", url, params=params, **kwargs)
281s /usr/lib/python3/dist-packages/requests/api.py:59: in request
281s     return session.request(method=method, url=url, **kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
281s     resp = self.send(prep, **send_kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
281s     r = adapter.send(request, **kwargs)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s             resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s         except (ProtocolError, OSError) as err:
281s             raise ConnectionError(err, request=request)
281s 
281s         except MaxRetryError as e:
281s             if isinstance(e.reason, ConnectTimeoutError):
281s                 # TODO: Remove this in 3.0.0: see #2811
281s                 if not isinstance(e.reason, NewConnectionError):
281s                     raise ConnectTimeout(e, request=request)
281s 
281s             if isinstance(e.reason, ResponseError):
281s                 raise RetryError(e, request=request)
281s 
281s             if isinstance(e.reason, _ProxyError):
281s                 raise ProxyError(e, request=request)
281s 
281s             if isinstance(e.reason, _SSLError):
281s                 # This branch is for urllib3 v1.22 and later.
281s                     raise SSLError(e, request=request)
281s 
281s >           raise ConnectionError(e, request=request)
281s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def setup_class(cls):
281s         cls.tmp_dir = TemporaryDirectory()
281s         def tmp(*parts):
281s             path = os.path.join(cls.tmp_dir.name, *parts)
281s             try:
281s                 os.makedirs(path)
281s             except OSError as e:
281s                 if e.errno != errno.EEXIST:
281s                     raise
281s             return path
281s 
281s         cls.home_dir = tmp('home')
281s         data_dir = cls.data_dir = tmp('data')
281s         config_dir = cls.config_dir = tmp('config')
281s         runtime_dir = cls.runtime_dir = tmp('runtime')
281s         cls.notebook_dir = tmp('notebooks')
281s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
281s         cls.env_patch.start()
281s         # Patch systemwide & user-wide data & config directories, to isolate
281s         # the tests from oddities of the local setup. But leave Python env
281s         # locations alone, so data files for e.g. nbconvert are accessible.
281s         # If this isolation isn't sufficient, you may need to run the tests in
281s         # a virtualenv or conda env.
281s         cls.path_patch = patch.multiple(
281s             jupyter_core.paths,
281s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
281s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
281s         )
281s         cls.path_patch.start()
281s 
281s         config = cls.config or Config()
281s         config.NotebookNotary.db_file = ':memory:'
281s 
281s         cls.token = hexlify(os.urandom(4)).decode('ascii')
281s 
281s         started = Event()
281s         def start_thread():
281s             try:
281s                 bind_args = cls.get_bind_args()
281s                 app = cls.notebook = NotebookApp(
281s                     port_retries=0,
281s                     open_browser=False,
281s                     config_dir=cls.config_dir,
281s                     data_dir=cls.data_dir,
281s                     runtime_dir=cls.runtime_dir,
281s                     notebook_dir=cls.notebook_dir,
281s                     base_url=cls.url_prefix,
281s                     config=config,
281s                     allow_root=True,
281s                     token=cls.token,
281s                     **bind_args
281s                 )
281s                 if "asyncio" in sys.modules:
281s                     app._init_asyncio_patch()
281s                 import asyncio
281s 
281s                 asyncio.set_event_loop(asyncio.new_event_loop())
281s                 # Patch the current loop in order to match production
281s                 # behavior
281s                 import nest_asyncio
281s 
281s                 nest_asyncio.apply()
281s                 # don't register signal handler during tests
281s                 app.init_signal = lambda : None
281s                 # clear log handlers and propagate to root for nose to capture it
281s                 # needs to be redone after initialize, which reconfigures logging
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 app.initialize(argv=cls.get_argv())
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 loop = IOLoop.current()
281s                 loop.add_callback(started.set)
281s                 app.start()
281s             finally:
281s                 # set the event, so failure to start doesn't cause a hang
281s                 started.set()
281s                 app.session_manager.close()
281s         cls.notebook_thread = Thread(target=start_thread)
281s         cls.notebook_thread.daemon = True
281s         cls.notebook_thread.start()
281s         started.wait()
281s >       cls.wait_until_alive()
281s 
281s notebook/tests/launchnotebook.py:198: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E                   RuntimeError: The notebook server failed to start
281s 
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s _____________ ERROR at setup of APITest.test_get_kernelspec_spaces _____________
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s >           sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s     raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s 
281s def create_connection(
281s     address: tuple[str, int],
281s     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s     source_address: tuple[str, int] | None = None,
281s     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s ) -> socket.socket:
281s     """Connect to *address* and return the socket object.
281s 
281s     Convenience function. Connect to *address* (a 2-tuple ``(host,
281s     port)``) and return the socket object. Passing the optional
281s     *timeout* parameter will set the timeout on the socket instance
281s     before attempting to connect. If no *timeout* is supplied, the
281s     global default timeout setting returned by :func:`socket.getdefaulttimeout`
281s     is used. If *source_address* is set it must be a tuple of (host, port)
281s     for the socket to bind as a source address before making the connection.
281s     An host of '' or port 0 tells the OS to use the default.
281s     """
281s 
281s     host, port = address
281s     if host.startswith("["):
281s         host = host.strip("[]")
281s     err = None
281s 
281s     # Using the value from allowed_gai_family() in the context of getaddrinfo lets
281s     # us select whether to work with IPv4 DNS records, IPv6 records, or both.
281s     # The original create_connection function always returns all records.
281s     family = allowed_gai_family()
281s 
281s     try:
281s         host.encode("idna")
281s     except UnicodeError:
281s         raise LocationParseError(f"'{host}', label empty or too long") from None
281s 
281s     for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
281s         af, socktype, proto, canonname, sa = res
281s         sock = None
281s         try:
281s             sock = socket.socket(af, socktype, proto)
281s 
281s             # If provided, set socket level options before connecting.
281s             _set_socket_options(sock, socket_options)
281s 
281s             if timeout is not _DEFAULT_TIMEOUT:
281s                 sock.settimeout(timeout)
281s             if source_address:
281s                 sock.bind(source_address)
281s >           sock.connect(sa)
281s E           ConnectionRefusedError: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s method = 'GET', url = '/a%40b/api/contents', body = None
281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s redirect = False, assert_same_host = False
281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
281s release_conn = False, chunked = False, body_pos = None, preload_content = False
281s decode_content = False, response_kw = {}
281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
281s destination_scheme = None, conn = None, release_this_conn = True
281s http_tunnel_required = False, err = None, clean_exit = False
281s 
281s     def urlopen( # type: ignore[override]
281s         self,
281s         method: str,
281s         url: str,
281s         body: _TYPE_BODY | None = None,
281s         headers: typing.Mapping[str, str] | None = None,
281s         retries: Retry | bool | int | None = None,
281s         redirect: bool = True,
281s         assert_same_host: bool = True,
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         pool_timeout: int | None = None,
281s         release_conn: bool | None = None,
281s         chunked: bool = False,
281s         body_pos: _TYPE_BODY_POSITION | None = None,
281s         preload_content: bool = True,
281s         decode_content: bool = True,
281s         **response_kw: typing.Any,
281s     ) -> BaseHTTPResponse:
281s         """
281s         Get a connection from the pool and perform an HTTP request. This is the
281s         lowest level call for making a request, so you'll need to specify all
281s         the raw details.
281s 
281s         .. note::
281s 
281s            More commonly, it's appropriate to use a convenience method
281s            such as :meth:`request`.
281s 
281s         .. note::
281s 
281s            `release_conn` will only behave as expected if
281s            `preload_content=False` because we want to make
281s            `preload_content=False` the default behaviour someday soon without
281s            breaking backwards compatibility.
281s 
281s         :param method:
281s             HTTP request method (such as GET, POST, PUT, etc.)
281s 
281s         :param url:
281s             The URL to perform the request on.
281s 
281s         :param body:
281s             Data to send in the request body, either :class:`str`, :class:`bytes`,
281s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
281s 
281s         :param headers:
281s             Dictionary of custom headers to send, such as User-Agent,
281s             If-None-Match, etc. If None, pool headers are used. If provided,
281s             these headers completely replace any pool-specific headers.
281s 
281s         :param retries:
281s             Configure the number of retries to allow before raising a
281s             :class:`~urllib3.exceptions.MaxRetryError` exception.
281s 
281s             Pass ``None`` to retry until you receive a response. Pass a
281s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
281s             over different types of retries.
281s             Pass an integer number to retry connection errors that many times,
281s             but no other types of errors. Pass zero to never retry.
281s 
281s             If ``False``, then retries are disabled and any exception is raised
281s             immediately. Also, instead of raising a MaxRetryError on redirects,
281s             the redirect response will be returned.
281s 
281s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
281s 
281s         :param redirect:
281s             If True, automatically handle redirects (status codes 301, 302,
281s             303, 307, 308). Each redirect counts as a retry. Disabling retries
281s             will disable redirect, too.
281s 
281s         :param assert_same_host:
281s             If ``True``, will make sure that the host of the pool requests is
281s             consistent else will raise HostChangedError. When ``False``, you can
281s             use the pool on an HTTP proxy and request foreign hosts.
281s 
281s         :param timeout:
281s             If specified, overrides the default timeout for this one
281s             request. It may be a float (in seconds) or an instance of
281s             :class:`urllib3.util.Timeout`.
281s 
281s         :param pool_timeout:
281s             If set and the pool is set to block=True, then this method will
281s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
281s             connection is available within the time period.
281s 
281s         :param bool preload_content:
281s             If True, the response's body will be preloaded into memory.
281s 
281s         :param bool decode_content:
281s             If True, will attempt to decode the body based on the
281s             'content-encoding' header.
281s 
281s         :param release_conn:
281s             If False, then the urlopen call will not release the connection
281s             back into the pool once a response is received (but will release if
281s             you read the entire contents of the response such as when
281s             `preload_content=True`). This is useful if you're not preloading
281s             the response's content immediately. You will need to call
281s             ``r.release_conn()`` on the response ``r`` to return the connection
281s             back into the pool. If None, it takes the value of ``preload_content``
281s             which defaults to ``True``.
281s 
281s         :param bool chunked:
281s             If True, urllib3 will send the body using chunked transfer
281s             encoding. Otherwise, urllib3 will send the body using the standard
281s             content-length form. Defaults to False.
281s 
281s         :param int body_pos:
281s             Position to seek to in file-like body in the event of a retry or
281s             redirect. Typically this won't need to be set because urllib3 will
281s             auto-populate the value when needed.
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E                   RuntimeError: The notebook server failed to start
281s 
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s __________ ERROR at setup of APITest.test_get_nonexistant_kernelspec ___________
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s >           sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s     raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s 
281s     def create_connection(
281s         address: tuple[str, int],
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         source_address: tuple[str, int] | None = None,
281s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s     ) -> socket.socket:
281s         """Connect to *address* and return the socket object.
281s 
281s         Convenience function. Connect to *address* (a 2-tuple ``(host,
281s         port)``) and return the socket object.
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s ___________ ERROR at setup of APITest.test_get_nonexistant_resource ____________ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s _______________ ERROR at setup of APITest.test_list_kernelspecs ________________ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
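The `wait_until_alive` harness above polls `api/contents` until a GET succeeds, bailing out with `RuntimeError` once the server thread has died. A self-contained sketch of the same shape, using a stdlib HTTP stub (the port, the `ContentsStub` handler, and the `MAX_WAITTIME`/`POLL_INTERVAL` values are assumptions; the real constants live in `notebook/tests/launchnotebook.py`):

```python
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.error import URLError
from urllib.request import urlopen

MAX_WAITTIME = 10    # seconds (assumed values, for illustration only)
POLL_INTERVAL = 0.1

class ContentsStub(BaseHTTPRequestHandler):
    # Hypothetical stand-in for the notebook server's /api/contents endpoint.
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"{}")
    def log_message(self, *args):  # keep the output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), ContentsStub)
server_thread = threading.Thread(target=server.serve_forever, daemon=True)
server_thread.start()
url = f"http://127.0.0.1:{server.server_port}/api/contents"

def wait_until_alive():
    # Same shape as wait_until_alive above: poll until a GET succeeds,
    # but give up immediately once the server thread is no longer alive.
    for _ in range(int(MAX_WAITTIME / POLL_INTERVAL)):
        try:
            return urlopen(url, timeout=2)
        except URLError as e:
            if not server_thread.is_alive():
                raise RuntimeError("The notebook server failed to start") from e
            time.sleep(POLL_INTERVAL)
    raise RuntimeError("server never came up")

resp = wait_until_alive()
assert resp.status == 200
server.shutdown()
```

In the log, the thread is alive but the `NotebookApp` inside it never bound port 12341, so every poll ends in a refused connection until the loop gives up.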
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
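The `create_connection` frames above iterate over every address `getaddrinfo` returns, remembering the last `OSError` and re-raising it only when all candidates fail. A stdlib-only sketch of that loop (the helper name `connect_first_resolved` is made up for illustration):

```python
import socket

def connect_first_resolved(host: str, port: int, timeout: float = 2.0) -> socket.socket:
    """Try each address getaddrinfo returns, keeping the last OSError --
    the same pattern urllib3's create_connection follows above."""
    err = None
    for af, socktype, proto, _canon, sa in socket.getaddrinfo(
        host, port, socket.AF_UNSPEC, socket.SOCK_STREAM
    ):
        sock = None
        try:
            sock = socket.socket(af, socktype, proto)
            sock.settimeout(timeout)
            sock.connect(sa)
            return sock
        except OSError as e:
            err = e
            if sock is not None:
                sock.close()
    # Only reached when every candidate address failed (or there were none).
    raise err if err is not None else OSError("getaddrinfo returned no results")

# Demonstrate against a throwaway local listener: if "localhost" resolves to
# ::1 first and that connect is refused, the loop falls through to 127.0.0.1.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
conn = connect_first_resolved("localhost", listener.getsockname()[1])
conn.close()
listener.close()
```

When no address accepts the connection, as with port 12341 in this log, the loop exits and `raise err` surfaces the final `ConnectionRefusedError`.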
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
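The root failure in every one of these tracebacks is a plain TCP refusal: nothing is listening on `localhost:12341`. The same `[Errno 111]` can be reproduced without urllib3 at all (the probe-then-close trick below is a hypothetical stand-in for the dead port; `errno.ECONNREFUSED` keeps the check portable, since 111 is the Linux value):

```python
import errno
import socket

# Grab a port the OS considers free, then close the listener so that
# nothing is accepting on it -- analogous to port 12341 in the log,
# where the notebook server never started listening.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
dead_port = probe.getsockname()[1]
probe.close()

try:
    socket.create_connection(("127.0.0.1", dead_port), timeout=2)
    refusal = None
except OSError as e:
    refusal = e

# On Linux this is [Errno 111] Connection refused.
assert refusal is not None and refusal.errno == errno.ECONNREFUSED
```

urllib3 wraps exactly this `OSError` in `NewConnectionError`, which requests then wraps in its own `ConnectionError`, producing the nested chain seen above.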
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod
281s def wait_until_alive(cls):
281s """Wait for the server to be alive"""
281s url = cls.base_url() + 'api/contents'
281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s try:
281s cls.fetch_url(url)
281s except ModuleNotFoundError as error:
281s # Errors that should be immediately thrown back to caller
281s raise error
281s except Exception as e:
281s if not cls.notebook_thread.is_alive():
281s > raise RuntimeError("The notebook server failed to start") from e
281s E RuntimeError: The notebook server failed to start
281s
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s _____________ ERROR at setup of APITest.test_list_kernelspecs_bad ______________
281s
281s self =
281s
281s def _new_conn(self) -> socket.socket:
281s """Establish a socket connection and set nodelay settings on it.
281s
281s :return: New socket connection.
281s """
281s try:
281s > sock = connection.create_connection(
281s (self._dns_host, self.port),
281s self.timeout,
281s source_address=self.source_address,
281s socket_options=self.socket_options,
281s )
281s
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s
281s def create_connection(
281s address: tuple[str, int],
281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s source_address: tuple[str, int] | None = None,
281s socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s ) -> socket.socket:
281s """Connect to *address* and return the socket object.
281s
281s Convenience function. Connect to *address* (a 2-tuple ``(host,
281s port)``) and return the socket object.
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s [... traceback identical to the previous test error, omitted ...]
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E                   RuntimeError: The notebook server failed to start
281s 
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s _________________ ERROR at setup of APITest.test_list_formats __________________
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s >           sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s     raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s 
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s 
281s     def create_connection(
281s         address: tuple[str, int],
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         source_address: tuple[str, int] | None = None,
281s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s     ) -> socket.socket:
281s         """Connect to *address* and return the socket object.
281s 
281s         Convenience function. Connect to *address* (a 2-tuple ``(host,
281s         port)``) and return the socket object.
281s         Passing the optional
281s         *timeout* parameter will set the timeout on the socket instance
281s         before attempting to connect. If no *timeout* is supplied, the
281s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
281s         is used. If *source_address* is set it must be a tuple of (host, port)
281s         for the socket to bind as a source address before making the connection.
281s         An host of '' or port 0 tells the OS to use the default.
281s         """
281s 
281s         host, port = address
281s         if host.startswith("["):
281s             host = host.strip("[]")
281s         err = None
281s 
281s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
281s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
281s         # The original create_connection function always returns all records.
281s         family = allowed_gai_family()
281s 
281s         try:
281s             host.encode("idna")
281s         except UnicodeError:
281s             raise LocationParseError(f"'{host}', label empty or too long") from None
281s 
281s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
281s             af, socktype, proto, canonname, sa = res
281s             sock = None
281s             try:
281s                 sock = socket.socket(af, socktype, proto)
281s 
281s                 # If provided, set socket level options before connecting.
281s                 _set_socket_options(sock, socket_options)
281s 
281s                 if timeout is not _DEFAULT_TIMEOUT:
281s                     sock.settimeout(timeout)
281s                 if source_address:
281s                     sock.bind(source_address)
281s >               sock.connect(sa)
281s E               ConnectionRefusedError: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s method = 'GET', url = '/a%40b/api/contents', body = None
281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s redirect = False, assert_same_host = False
281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
281s release_conn = False, chunked = False, body_pos = None, preload_content = False
281s decode_content = False, response_kw = {}
281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
281s destination_scheme = None, conn = None, release_this_conn = True
281s http_tunnel_required = False, err = None, clean_exit = False
281s 
281s     def urlopen(  # type: ignore[override]
281s         self,
281s         method: str,
281s         url: str,
281s         body: _TYPE_BODY | None = None,
281s         headers: typing.Mapping[str, str] | None = None,
281s         retries: Retry | bool | int | None = None,
281s         redirect: bool = True,
281s         assert_same_host: bool = True,
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         pool_timeout: int | None = None,
281s         release_conn: bool | None = None,
281s         chunked: bool = False,
281s         body_pos: _TYPE_BODY_POSITION | None = None,
281s         preload_content: bool = True,
281s         decode_content: bool = True,
281s         **response_kw: typing.Any,
281s     ) -> BaseHTTPResponse:
281s         """
281s         Get a connection from the pool and perform an HTTP request.
281s         This is the
281s         lowest level call for making a request, so you'll need to specify all
281s         the raw details.
281s 
281s         .. note::
281s 
281s            More commonly, it's appropriate to use a convenience method
281s            such as :meth:`request`.
281s 
281s         .. note::
281s 
281s            `release_conn` will only behave as expected if
281s            `preload_content=False` because we want to make
281s            `preload_content=False` the default behaviour someday soon without
281s            breaking backwards compatibility.
281s 
281s         :param method:
281s             HTTP request method (such as GET, POST, PUT, etc.)
281s 
281s         :param url:
281s             The URL to perform the request on.
281s 
281s         :param body:
281s             Data to send in the request body, either :class:`str`, :class:`bytes`,
281s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
281s 
281s         :param headers:
281s             Dictionary of custom headers to send, such as User-Agent,
281s             If-None-Match, etc. If None, pool headers are used. If provided,
281s             these headers completely replace any pool-specific headers.
281s 
281s         :param retries:
281s             Configure the number of retries to allow before raising a
281s             :class:`~urllib3.exceptions.MaxRetryError` exception.
281s 
281s             Pass ``None`` to retry until you receive a response. Pass a
281s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
281s             over different types of retries.
281s             Pass an integer number to retry connection errors that many times,
281s             but no other types of errors. Pass zero to never retry.
281s 
281s             If ``False``, then retries are disabled and any exception is raised
281s             immediately. Also, instead of raising a MaxRetryError on redirects,
281s             the redirect response will be returned.
281s 
281s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
281s 
281s         :param redirect:
281s             If True, automatically handle redirects (status codes 301, 302,
281s             303, 307, 308). Each redirect counts as a retry. Disabling retries
281s             will disable redirect, too.
281s 
281s         :param assert_same_host:
281s             If ``True``, will make sure that the host of the pool requests is
281s             consistent else will raise HostChangedError. When ``False``, you can
281s             use the pool on an HTTP proxy and request foreign hosts.
281s 
281s         :param timeout:
281s             If specified, overrides the default timeout for this one
281s             request. It may be a float (in seconds) or an instance of
281s             :class:`urllib3.util.Timeout`.
281s 
281s         :param pool_timeout:
281s             If set and the pool is set to block=True, then this method will
281s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
281s             connection is available within the time period.
281s 
281s         :param bool preload_content:
281s             If True, the response's body will be preloaded into memory.
281s 
281s         :param bool decode_content:
281s             If True, will attempt to decode the body based on the
281s             'content-encoding' header.
281s 
281s         :param release_conn:
281s             If False, then the urlopen call will not release the connection
281s             back into the pool once a response is received (but will release if
281s             you read the entire contents of the response such as when
281s             `preload_content=True`). This is useful if you're not preloading
281s             the response's content immediately. You will need to call
281s             ``r.release_conn()`` on the response ``r`` to return the connection
281s             back into the pool. If None, it takes the value of ``preload_content``
281s             which defaults to ``True``.
281s 
281s         :param bool chunked:
281s             If True, urllib3 will send the body using chunked transfer
281s             encoding. Otherwise, urllib3 will send the body using the standard
281s             content-length form. Defaults to False.
281s 
281s         :param int body_pos:
281s             Position to seek to in file-like body in the event of a retry or
281s             redirect. Typically this won't need to be set because urllib3 will
281s             auto-populate the value when needed.
281s         """
281s         parsed_url = parse_url(url)
281s         destination_scheme = parsed_url.scheme
281s 
281s         if headers is None:
281s             headers = self.headers
281s 
281s         if not isinstance(retries, Retry):
281s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
281s 
281s         if release_conn is None:
281s             release_conn = preload_content
281s 
281s         # Check host
281s         if assert_same_host and not self.is_same_host(url):
281s             raise HostChangedError(self, url, retries)
281s 
281s         # Ensure that the URL we're connecting to is properly encoded
281s         if url.startswith("/"):
281s             url = to_str(_encode_target(url))
281s         else:
281s             url = to_str(parsed_url.url)
281s 
281s         conn = None
281s 
281s         # Track whether `conn` needs to be released before
281s         # returning/raising/recursing. Update this variable if necessary, and
281s         # leave `release_conn` constant throughout the function. That way, if
281s         # the function recurses, the original value of `release_conn` will be
281s         # passed down into the recursive call, and its value will be respected.
281s         #
281s         # See issue #651 [1] for details.
281s         #
281s         # [1] 
281s         release_this_conn = release_conn
281s 
281s         http_tunnel_required = connection_requires_http_tunnel(
281s             self.proxy, self.proxy_config, destination_scheme
281s         )
281s 
281s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
281s         # have to copy the headers dict so we can safely change it without those
281s         # changes being reflected in anyone else's copy.
281s         if not http_tunnel_required:
281s             headers = headers.copy()  # type: ignore[attr-defined]
281s             headers.update(self.proxy_headers)  # type: ignore[union-attr]
281s 
281s         # Must keep the exception bound to a separate variable or else Python 3
281s         # complains about UnboundLocalError.
281s         err = None
281s 
281s         # Keep track of whether we cleanly exited the except block. This
281s         # ensures we do proper cleanup in finally.
281s         clean_exit = False
281s 
281s         # Rewind body position, if needed.
281s         # Record current position
281s         # for future rewinds in the event of a redirect/retry.
281s         body_pos = set_file_position(body, body_pos)
281s 
281s         try:
281s             # Request a connection from the queue.
281s             timeout_obj = self._get_timeout(timeout)
281s             conn = self._get_conn(timeout=pool_timeout)
281s 
281s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
281s 
281s             # Is this a closed/new connection that requires CONNECT tunnelling?
281s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
281s                 try:
281s                     self._prepare_proxy(conn)
281s                 except (BaseSSLError, OSError, SocketTimeout) as e:
281s                     self._raise_timeout(
281s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
281s                     )
281s                     raise
281s 
281s             # If we're going to release the connection in ``finally:``, then
281s             # the response doesn't need to know about the connection. Otherwise
281s             # it will also try to release it and we'll have a double-release
281s             # mess.
281s             response_conn = conn if not release_conn else None
281s 
281s             # Make the request on the HTTPConnection object
281s >           response = self._make_request(
281s                 conn,
281s                 method,
281s                 url,
281s                 timeout=timeout_obj,
281s                 body=body,
281s                 headers=headers,
281s                 chunked=chunked,
281s                 retries=retries,
281s                 response_conn=response_conn,
281s                 preload_content=preload_content,
281s                 decode_content=decode_content,
281s                 **response_kw,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
281s     conn.request(
281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
281s     self.endheaders()
281s /usr/lib/python3.12/http/client.py:1331: in endheaders
281s     self._send_output(message_body, encode_chunked=encode_chunked)
281s /usr/lib/python3.12/http/client.py:1091: in _send_output
281s     self.send(msg)
281s /usr/lib/python3.12/http/client.py:1035: in
send
281s     self.connect()
281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
281s     self.sock = self._new_conn()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s             sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s         except socket.gaierror as e:
281s             raise NameResolutionError(self.host, self, e) from e
281s         except SocketTimeout as e:
281s             raise ConnectTimeoutError(
281s                 self,
281s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
281s             ) from e
281s 
281s         except OSError as e:
281s >           raise NewConnectionError(
281s                 self, f"Failed to establish a new connection: {e}"
281s             ) from e
281s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s >           resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
281s     retries = retries.increment(
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s 
281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s method = 'GET', url = '/a%40b/api/contents', response = None
281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
281s _pool = 
281s _stacktrace = 
281s 
281s     def increment(
281s         self,
281s         method: str | None = None,
281s         url: str | None = None,
281s         response: BaseHTTPResponse | None = None,
281s         error: Exception | None = None,
281s         _pool: ConnectionPool | None = None,
281s         _stacktrace: TracebackType | None = None,
281s     ) -> Retry:
281s         """Return a new Retry object with incremented retry counters.
281s 
281s         :param response: A response object, or None, if the server did not
281s             return a response.
281s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
281s         :param Exception error: An error encountered during the request, or
281s             None if the response was received successfully.
281s 
281s         :return: A new ``Retry`` object.
281s         """
281s         if self.total is False and error:
281s             # Disabled, indicate to re-raise the error.
281s             raise reraise(type(error), error, _stacktrace)
281s 
281s         total = self.total
281s         if total is not None:
281s             total -= 1
281s 
281s         connect = self.connect
281s         read = self.read
281s         redirect = self.redirect
281s         status_count = self.status
281s         other = self.other
281s         cause = "unknown"
281s         status = None
281s         redirect_location = None
281s 
281s         if error and self._is_connection_error(error):
281s             # Connect retry?
281s             if connect is False:
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif connect is not None:
281s                 connect -= 1
281s 
281s         elif error and self._is_read_error(error):
281s             # Read retry?
281s             if read is False or method is None or not self._is_method_retryable(method):
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif read is not None:
281s                 read -= 1
281s 
281s         elif error:
281s             # Other retry?
281s             if other is not None:
281s                 other -= 1
281s 
281s         elif response and response.get_redirect_location():
281s             # Redirect retry?
281s             if redirect is not None:
281s                 redirect -= 1
281s             cause = "too many redirects"
281s             response_redirect_location = response.get_redirect_location()
281s             if response_redirect_location:
281s                 redirect_location = response_redirect_location
281s             status = response.status
281s 
281s         else:
281s             # Incrementing because of a server error like a 500 in
281s             # status_forcelist and the given method is in the allowed_methods
281s             cause = ResponseError.GENERIC_ERROR
281s             if response and response.status:
281s                 if status_count is not None:
281s                     status_count -= 1
281s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
281s                 status = response.status
281s 
281s         history = self.history + (
281s             RequestHistory(method, url, error, status, redirect_location),
281s         )
281s 
281s         new_retry = self.new(
281s             total=total,
281s             connect=connect,
281s             read=read,
281s             redirect=redirect,
281s             status=status_count,
281s             other=other,
281s             history=history,
281s         )
281s 
281s         if new_retry.is_exhausted():
281s             reason = error or
ResponseError(cause)
281s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
281s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s >               cls.fetch_url(url)
281s 
281s notebook/tests/launchnotebook.py:53: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s notebook/tests/launchnotebook.py:82: in fetch_url
281s     return requests.get(url)
281s /usr/lib/python3/dist-packages/requests/api.py:73: in get
281s     return request("get", url, params=params, **kwargs)
281s /usr/lib/python3/dist-packages/requests/api.py:59: in request
281s     return session.request(method=method, url=url, **kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
281s     resp = self.send(prep, **send_kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
281s     r = adapter.send(request, **kwargs)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s             resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s         except (ProtocolError, OSError) as err:
281s             raise ConnectionError(err, request=request)
281s 
281s         except MaxRetryError as e:
281s             if isinstance(e.reason, ConnectTimeoutError):
281s                 # TODO: Remove this in 3.0.0: see #2811
281s                 if not isinstance(e.reason, NewConnectionError):
281s                     raise ConnectTimeout(e, request=request)
281s 
281s             if isinstance(e.reason, ResponseError):
281s                 raise RetryError(e, request=request)
281s 
281s             if isinstance(e.reason, _ProxyError):
281s                 raise ProxyError(e, request=request)
281s 
281s             if isinstance(e.reason, _SSLError):
281s                 # This branch is for urllib3 v1.22 and later.
281s                 raise SSLError(e, request=request)
281s 
281s >           raise ConnectionError(e, request=request)
281s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def setup_class(cls):
281s         cls.tmp_dir = TemporaryDirectory()
281s         def tmp(*parts):
281s             path = os.path.join(cls.tmp_dir.name, *parts)
281s             try:
281s                 os.makedirs(path)
281s             except OSError as e:
281s                 if e.errno != errno.EEXIST:
281s                     raise
281s             return path
281s 
281s         cls.home_dir = tmp('home')
281s         data_dir = cls.data_dir = tmp('data')
281s         config_dir = cls.config_dir = tmp('config')
281s         runtime_dir = cls.runtime_dir = tmp('runtime')
281s         cls.notebook_dir = tmp('notebooks')
281s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
281s         cls.env_patch.start()
281s         # Patch systemwide & user-wide data & config directories, to isolate
281s         # the tests from oddities of the local setup. But leave Python env
281s         # locations alone, so data files for e.g. nbconvert are accessible.
281s         # If this isolation isn't sufficient, you may need to run the tests in
281s         # a virtualenv or conda env.
281s         cls.path_patch = patch.multiple(
281s             jupyter_core.paths,
281s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
281s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
281s         )
281s         cls.path_patch.start()
281s 
281s         config = cls.config or Config()
281s         config.NotebookNotary.db_file = ':memory:'
281s 
281s         cls.token = hexlify(os.urandom(4)).decode('ascii')
281s 
281s         started = Event()
281s         def start_thread():
281s             try:
281s                 bind_args = cls.get_bind_args()
281s                 app = cls.notebook = NotebookApp(
281s                     port_retries=0,
281s                     open_browser=False,
281s                     config_dir=cls.config_dir,
281s                     data_dir=cls.data_dir,
281s                     runtime_dir=cls.runtime_dir,
281s                     notebook_dir=cls.notebook_dir,
281s                     base_url=cls.url_prefix,
281s                     config=config,
281s                     allow_root=True,
281s                     token=cls.token,
281s                     **bind_args
281s                 )
281s                 if "asyncio" in sys.modules:
281s                     app._init_asyncio_patch()
281s                     import asyncio
281s 
281s                     asyncio.set_event_loop(asyncio.new_event_loop())
281s                     # Patch the current loop in order to match production
281s                     # behavior
281s                     import nest_asyncio
281s 
281s                     nest_asyncio.apply()
281s                 # don't register signal handler during tests
281s                 app.init_signal = lambda : None
281s                 # clear log handlers and propagate to root for nose to capture it
281s                 # needs to be redone after initialize, which reconfigures logging
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 app.initialize(argv=cls.get_argv())
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 loop = IOLoop.current()
281s                 loop.add_callback(started.set)
281s                 app.start()
281s             finally:
281s                 # set the event, so failure to start doesn't cause a hang
281s                 started.set()
281s                 app.session_manager.close()
281s         cls.notebook_thread = Thread(target=start_thread)
281s         cls.notebook_thread.daemon = True
281s         cls.notebook_thread.start()
281s         started.wait()
281s >       cls.wait_until_alive()
281s 
281s notebook/tests/launchnotebook.py:198: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E                   RuntimeError: The notebook server failed to start
281s 
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s _________________ ERROR at setup of SessionAPITest.test_create _________________
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s >           sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s     raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s 
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s 
281s     def create_connection(
281s         address: tuple[str, int],
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         source_address: tuple[str, int] | None = None,
281s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s     ) -> socket.socket:
281s         """Connect to *address* and return the socket object.
281s 
281s         Convenience function. Connect to *address* (a 2-tuple ``(host,
281s         port)``) and return the socket object.
281s         Passing the optional
281s         *timeout* parameter will set the timeout on the socket instance
281s         before attempting to connect. If no *timeout* is supplied, the
281s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
281s         is used. If *source_address* is set it must be a tuple of (host, port)
281s         for the socket to bind as a source address before making the connection.
281s         An host of '' or port 0 tells the OS to use the default.
281s         """
281s 
281s         host, port = address
281s         if host.startswith("["):
281s             host = host.strip("[]")
281s         err = None
281s 
281s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
281s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
281s         # The original create_connection function always returns all records.
281s         family = allowed_gai_family()
281s 
281s         try:
281s             host.encode("idna")
281s         except UnicodeError:
281s             raise LocationParseError(f"'{host}', label empty or too long") from None
281s 
281s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
281s             af, socktype, proto, canonname, sa = res
281s             sock = None
281s             try:
281s                 sock = socket.socket(af, socktype, proto)
281s 
281s                 # If provided, set socket level options before connecting.
281s                 _set_socket_options(sock, socket_options)
281s 
281s                 if timeout is not _DEFAULT_TIMEOUT:
281s                     sock.settimeout(timeout)
281s                 if source_address:
281s                     sock.bind(source_address)
281s >               sock.connect(sa)
281s E               ConnectionRefusedError: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s method = 'GET', url = '/a%40b/api/contents', body = None
281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s redirect = False, assert_same_host = False
281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
281s release_conn = False, chunked = False, body_pos = None, preload_content = False
281s decode_content = False, response_kw = {}
281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
281s destination_scheme = None, conn = None, release_this_conn = True
281s http_tunnel_required = False, err = None, clean_exit = False
281s 
281s     def urlopen(  # type: ignore[override]
281s         self,
281s         method: str,
281s         url: str,
281s         body: _TYPE_BODY | None = None,
281s         headers: typing.Mapping[str, str] | None = None,
281s         retries: Retry | bool | int | None = None,
281s         redirect: bool = True,
281s         assert_same_host: bool = True,
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         pool_timeout: int | None = None,
281s         release_conn: bool | None = None,
281s         chunked: bool = False,
281s         body_pos: _TYPE_BODY_POSITION | None = None,
281s         preload_content: bool = True,
281s         decode_content: bool = True,
281s         **response_kw: typing.Any,
281s     ) -> BaseHTTPResponse:
281s         """
281s         Get a connection from the pool and perform an HTTP request.
281s         This is the lowest level call for making a request, so you'll need
281s         to specify all the raw details.
281s 
281s         .. note::
281s 
281s            More commonly, it's appropriate to use a convenience method
281s            such as :meth:`request`.
281s 
281s         .. note::
281s 
281s            `release_conn` will only behave as expected if
281s            `preload_content=False` because we want to make
281s            `preload_content=False` the default behaviour someday soon without
281s            breaking backwards compatibility.
281s 
281s         :param method:
281s             HTTP request method (such as GET, POST, PUT, etc.)
281s 
281s         :param url:
281s             The URL to perform the request on.
281s 
281s         :param body:
281s             Data to send in the request body, either :class:`str`, :class:`bytes`,
281s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
281s 
281s         :param headers:
281s             Dictionary of custom headers to send, such as User-Agent,
281s             If-None-Match, etc. If None, pool headers are used. If provided,
281s             these headers completely replace any pool-specific headers.
281s 
281s         :param retries:
281s             Configure the number of retries to allow before raising a
281s             :class:`~urllib3.exceptions.MaxRetryError` exception.
281s 
281s             Pass ``None`` to retry until you receive a response. Pass a
281s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
281s             over different types of retries.
281s             Pass an integer number to retry connection errors that many times,
281s             but no other types of errors. Pass zero to never retry.
281s 
281s             If ``False``, then retries are disabled and any exception is raised
281s             immediately. Also, instead of raising a MaxRetryError on redirects,
281s             the redirect response will be returned.
281s 
281s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
281s 
281s         :param redirect:
281s             If True, automatically handle redirects (status codes 301, 302,
281s             303, 307, 308). Each redirect counts as a retry. Disabling retries
281s             will disable redirect, too.
281s 
281s         :param assert_same_host:
281s             If ``True``, will make sure that the host of the pool requests is
281s             consistent else will raise HostChangedError. When ``False``, you can
281s             use the pool on an HTTP proxy and request foreign hosts.
281s 
281s         :param timeout:
281s             If specified, overrides the default timeout for this one
281s             request. It may be a float (in seconds) or an instance of
281s             :class:`urllib3.util.Timeout`.
281s 
281s         :param pool_timeout:
281s             If set and the pool is set to block=True, then this method will
281s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
281s             connection is available within the time period.
281s 
281s         :param bool preload_content:
281s             If True, the response's body will be preloaded into memory.
281s 
281s         :param bool decode_content:
281s             If True, will attempt to decode the body based on the
281s             'content-encoding' header.
281s 
281s         :param release_conn:
281s             If False, then the urlopen call will not release the connection
281s             back into the pool once a response is received (but will release if
281s             you read the entire contents of the response such as when
281s             `preload_content=True`). This is useful if you're not preloading
281s             the response's content immediately. You will need to call
281s             ``r.release_conn()`` on the response ``r`` to return the connection
281s             back into the pool. If None, it takes the value of ``preload_content``
281s             which defaults to ``True``.
281s 
281s         :param bool chunked:
281s             If True, urllib3 will send the body using chunked transfer
281s             encoding. Otherwise, urllib3 will send the body using the standard
281s             content-length form. Defaults to False.
281s 
281s         :param int body_pos:
281s             Position to seek to in file-like body in the event of a retry or
281s             redirect. Typically this won't need to be set because urllib3 will
281s             auto-populate the value when needed.
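Per the `retries` docstring above, an integer retries connection errors that many times, and `Retry(total=0)` — the value visible in this traceback — exhausts on the first failure. A short sketch of both behaviours, assuming the urllib3 2.x shipped in this testbed is importable (`exhausts_immediately` is a hypothetical helper):

```python
from urllib3.exceptions import MaxRetryError
from urllib3.util.retry import Retry

# An int is promoted to a Retry with that many total attempts.
retries = Retry.from_int(3)

def exhausts_immediately():
    """total=0 means 'never retry': the very first increment() raises
    MaxRetryError, wrapping the underlying error as its `reason`."""
    try:
        Retry(total=0).increment(
            method="GET",
            url="/api/contents",
            error=OSError(111, "Connection refused"),
        )
    except MaxRetryError as e:
        return isinstance(e.reason, OSError)
    return False
```

This is why the log shows a single refused connect per poll attempt rather than repeated connects: the test harness passes a zero-retry policy and does its own polling loop instead.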
281s         """
281s         parsed_url = parse_url(url)
281s         destination_scheme = parsed_url.scheme
281s 
281s         if headers is None:
281s             headers = self.headers
281s 
281s         if not isinstance(retries, Retry):
281s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
281s 
281s         if release_conn is None:
281s             release_conn = preload_content
281s 
281s         # Check host
281s         if assert_same_host and not self.is_same_host(url):
281s             raise HostChangedError(self, url, retries)
281s 
281s         # Ensure that the URL we're connecting to is properly encoded
281s         if url.startswith("/"):
281s             url = to_str(_encode_target(url))
281s         else:
281s             url = to_str(parsed_url.url)
281s 
281s         conn = None
281s 
281s         # Track whether `conn` needs to be released before
281s         # returning/raising/recursing. Update this variable if necessary, and
281s         # leave `release_conn` constant throughout the function. That way, if
281s         # the function recurses, the original value of `release_conn` will be
281s         # passed down into the recursive call, and its value will be respected.
281s         #
281s         # See issue #651 [1] for details.
281s         #
281s         # [1] 
281s         release_this_conn = release_conn
281s 
281s         http_tunnel_required = connection_requires_http_tunnel(
281s             self.proxy, self.proxy_config, destination_scheme
281s         )
281s 
281s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
281s         # have to copy the headers dict so we can safely change it without those
281s         # changes being reflected in anyone else's copy.
281s         if not http_tunnel_required:
281s             headers = headers.copy()  # type: ignore[attr-defined]
281s             headers.update(self.proxy_headers)  # type: ignore[union-attr]
281s 
281s         # Must keep the exception bound to a separate variable or else Python 3
281s         # complains about UnboundLocalError.
281s         err = None
281s 
281s         # Keep track of whether we cleanly exited the except block. This
281s         # ensures we do proper cleanup in finally.
281s         clean_exit = False
281s 
281s         # Rewind body position, if needed.
281s         # Record current position
281s         # for future rewinds in the event of a redirect/retry.
281s         body_pos = set_file_position(body, body_pos)
281s 
281s         try:
281s             # Request a connection from the queue.
281s             timeout_obj = self._get_timeout(timeout)
281s             conn = self._get_conn(timeout=pool_timeout)
281s 
281s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
281s 
281s             # Is this a closed/new connection that requires CONNECT tunnelling?
281s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
281s                 try:
281s                     self._prepare_proxy(conn)
281s                 except (BaseSSLError, OSError, SocketTimeout) as e:
281s                     self._raise_timeout(
281s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
281s                     )
281s                     raise
281s 
281s             # If we're going to release the connection in ``finally:``, then
281s             # the response doesn't need to know about the connection. Otherwise
281s             # it will also try to release it and we'll have a double-release
281s             # mess.
281s             response_conn = conn if not release_conn else None
281s 
281s             # Make the request on the HTTPConnection object
281s >           response = self._make_request(
281s                 conn,
281s                 method,
281s                 url,
281s                 timeout=timeout_obj,
281s                 body=body,
281s                 headers=headers,
281s                 chunked=chunked,
281s                 retries=retries,
281s                 response_conn=response_conn,
281s                 preload_content=preload_content,
281s                 decode_content=decode_content,
281s                 **response_kw,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
281s     conn.request(
281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
281s     self.endheaders()
281s /usr/lib/python3.12/http/client.py:1331: in endheaders
281s     self._send_output(message_body, encode_chunked=encode_chunked)
281s /usr/lib/python3.12/http/client.py:1091: in _send_output
281s     self.send(msg)
281s /usr/lib/python3.12/http/client.py:1035: in send
281s     self.connect()
281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
281s     self.sock = self._new_conn()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s             sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s         except socket.gaierror as e:
281s             raise NameResolutionError(self.host, self, e) from e
281s         except SocketTimeout as e:
281s             raise ConnectTimeoutError(
281s                 self,
281s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
281s             ) from e
281s 
281s         except OSError as e:
281s >           raise NewConnectionError(
281s                 self, f"Failed to establish a new connection: {e}"
281s             ) from e
281s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s >           resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
281s     retries = retries.increment(
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s method = 'GET', url = '/a%40b/api/contents', response = None
281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
281s _pool = 
281s _stacktrace = 
281s 
281s     def increment(
281s         self,
281s         method: str | None = None,
281s         url: str | None = None,
281s         response: BaseHTTPResponse | None = None,
281s         error: Exception | None = None,
281s         _pool: ConnectionPool | None = None,
281s         _stacktrace: TracebackType | None = None,
281s     ) -> Retry:
281s         """Return a new Retry object with incremented retry counters.
281s 
281s         :param response: A response object, or None, if the server did not
281s             return a response.
281s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
281s         :param Exception error: An error encountered during the request, or
281s             None if the response was received successfully.
281s 
281s         :return: A new ``Retry`` object.
281s         """
281s         if self.total is False and error:
281s             # Disabled, indicate to re-raise the error.
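The branch above normalizes the user-facing timeout into urllib3's `Timeout` (imported as `TimeoutSauce` in requests): a `(connect, read)` tuple splits the two phases, a bare number applies to both. A sketch of the same normalization, assuming urllib3 is importable (`normalize_timeout` is a hypothetical helper, not requests' code):

```python
from urllib3.util.timeout import Timeout

def normalize_timeout(timeout):
    """Mirror the adapter's logic: tuple -> (connect, read); scalar -> both."""
    if isinstance(timeout, tuple):
        connect, read = timeout
        return Timeout(connect=connect, read=read)
    if isinstance(timeout, Timeout):
        return timeout  # already normalized; pass through unchanged
    return Timeout(connect=timeout, read=timeout)
```

Note that the failing request in this log used `timeout=None`, i.e. `Timeout(connect=None, read=None)` — no connect timeout at all, which is why the refusal (not a timeout) is what surfaces.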
281s             raise reraise(type(error), error, _stacktrace)
281s 
281s         total = self.total
281s         if total is not None:
281s             total -= 1
281s 
281s         connect = self.connect
281s         read = self.read
281s         redirect = self.redirect
281s         status_count = self.status
281s         other = self.other
281s         cause = "unknown"
281s         status = None
281s         redirect_location = None
281s 
281s         if error and self._is_connection_error(error):
281s             # Connect retry?
281s             if connect is False:
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif connect is not None:
281s                 connect -= 1
281s 
281s         elif error and self._is_read_error(error):
281s             # Read retry?
281s             if read is False or method is None or not self._is_method_retryable(method):
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif read is not None:
281s                 read -= 1
281s 
281s         elif error:
281s             # Other retry?
281s             if other is not None:
281s                 other -= 1
281s 
281s         elif response and response.get_redirect_location():
281s             # Redirect retry?
281s             if redirect is not None:
281s                 redirect -= 1
281s             cause = "too many redirects"
281s             response_redirect_location = response.get_redirect_location()
281s             if response_redirect_location:
281s                 redirect_location = response_redirect_location
281s             status = response.status
281s 
281s         else:
281s             # Incrementing because of a server error like a 500 in
281s             # status_forcelist and the given method is in the allowed_methods
281s             cause = ResponseError.GENERIC_ERROR
281s             if response and response.status:
281s                 if status_count is not None:
281s                     status_count -= 1
281s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
281s                 status = response.status
281s 
281s         history = self.history + (
281s             RequestHistory(method, url, error, status, redirect_location),
281s         )
281s 
281s         new_retry = self.new(
281s             total=total,
281s             connect=connect,
281s             read=read,
281s             redirect=redirect,
281s             status=status_count,
281s             other=other,
281s             history=history,
281s         )
281s 
281s         if new_retry.is_exhausted():
281s             reason = error or ResponseError(cause)
281s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
281s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s >               cls.fetch_url(url)
281s 
281s notebook/tests/launchnotebook.py:53: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s notebook/tests/launchnotebook.py:82: in fetch_url
281s     return requests.get(url)
281s /usr/lib/python3/dist-packages/requests/api.py:73: in get
281s     return request("get", url, params=params, **kwargs)
281s /usr/lib/python3/dist-packages/requests/api.py:59: in request
281s     return session.request(method=method, url=url, **kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
281s     resp = self.send(prep, **send_kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
281s     r = adapter.send(request, **kwargs)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s             resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s         except (ProtocolError, OSError) as err:
281s             raise ConnectionError(err, request=request)
281s 
281s         except MaxRetryError as e:
281s             if isinstance(e.reason, ConnectTimeoutError):
281s                 # TODO: Remove this in 3.0.0: see #2811
281s                 if not isinstance(e.reason, NewConnectionError):
281s                     raise ConnectTimeout(e, request=request)
281s 
281s             if isinstance(e.reason, ResponseError):
281s                 raise RetryError(e, request=request)
281s 
281s             if isinstance(e.reason, _ProxyError):
281s                 raise ProxyError(e, request=request)
281s 
281s             if isinstance(e.reason, _SSLError):
281s                 # This branch is for urllib3 v1.22 and later.
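The `except MaxRetryError` ladder above selects the requests exception from `e.reason` — note the #2811 quirk it works around: `NewConnectionError` subclasses `ConnectTimeoutError` yet must map to `ConnectionError`, not `ConnectTimeout`. A standalone sketch of that mapping, assuming requests and urllib3 are importable (`map_retry_error` is a hypothetical helper, not requests' code):

```python
from requests import exceptions as rex
from urllib3.exceptions import (
    ConnectTimeoutError,
    MaxRetryError,
    NewConnectionError,
    ResponseError,
)
from urllib3.exceptions import ProxyError as U3ProxyError
from urllib3.exceptions import SSLError as U3SSLError

def map_retry_error(e):
    """Return the requests exception class the adapter would raise for `e`."""
    # NewConnectionError inherits ConnectTimeoutError, so exclude it explicitly
    # (mirrors the TODO-#2811 guard in HTTPAdapter.send).
    if isinstance(e.reason, ConnectTimeoutError) and not isinstance(
        e.reason, NewConnectionError
    ):
        return rex.ConnectTimeout
    if isinstance(e.reason, ResponseError):
        return rex.RetryError
    if isinstance(e.reason, U3ProxyError):
        return rex.ProxyError
    if isinstance(e.reason, U3SSLError):
        return rex.SSLError
    return rex.ConnectionError
```

For the refused connect in this log, the reason is a `NewConnectionError`, so the fall-through branch fires and the test sees `requests.exceptions.ConnectionError`.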
281s                 raise SSLError(e, request=request)
281s 
281s >           raise ConnectionError(e, request=request)
281s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def setup_class(cls):
281s         cls.tmp_dir = TemporaryDirectory()
281s         def tmp(*parts):
281s             path = os.path.join(cls.tmp_dir.name, *parts)
281s             try:
281s                 os.makedirs(path)
281s             except OSError as e:
281s                 if e.errno != errno.EEXIST:
281s                     raise
281s             return path
281s 
281s         cls.home_dir = tmp('home')
281s         data_dir = cls.data_dir = tmp('data')
281s         config_dir = cls.config_dir = tmp('config')
281s         runtime_dir = cls.runtime_dir = tmp('runtime')
281s         cls.notebook_dir = tmp('notebooks')
281s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
281s         cls.env_patch.start()
281s         # Patch systemwide & user-wide data & config directories, to isolate
281s         # the tests from oddities of the local setup. But leave Python env
281s         # locations alone, so data files for e.g. nbconvert are accessible.
281s         # If this isolation isn't sufficient, you may need to run the tests in
281s         # a virtualenv or conda env.
281s         cls.path_patch = patch.multiple(
281s             jupyter_core.paths,
281s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
281s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
281s         )
281s         cls.path_patch.start()
281s 
281s         config = cls.config or Config()
281s         config.NotebookNotary.db_file = ':memory:'
281s 
281s         cls.token = hexlify(os.urandom(4)).decode('ascii')
281s 
281s         started = Event()
281s         def start_thread():
281s             try:
281s                 bind_args = cls.get_bind_args()
281s                 app = cls.notebook = NotebookApp(
281s                     port_retries=0,
281s                     open_browser=False,
281s                     config_dir=cls.config_dir,
281s                     data_dir=cls.data_dir,
281s                     runtime_dir=cls.runtime_dir,
281s                     notebook_dir=cls.notebook_dir,
281s                     base_url=cls.url_prefix,
281s                     config=config,
281s                     allow_root=True,
281s                     token=cls.token,
281s                     **bind_args
281s                 )
281s                 if "asyncio" in sys.modules:
281s                     app._init_asyncio_patch()
281s                     import asyncio
281s 
281s                     asyncio.set_event_loop(asyncio.new_event_loop())
281s                     # Patch the current loop in order to match production
281s                     # behavior
281s                     import nest_asyncio
281s 
281s                     nest_asyncio.apply()
281s                 # don't register signal handler during tests
281s                 app.init_signal = lambda : None
281s                 # clear log handlers and propagate to root for nose to capture it
281s                 # needs to be redone after initialize, which reconfigures logging
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 app.initialize(argv=cls.get_argv())
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 loop = IOLoop.current()
281s                 loop.add_callback(started.set)
281s                 app.start()
281s             finally:
281s                 # set the event, so failure to start doesn't cause a hang
281s                 started.set()
281s                 app.session_manager.close()
281s         cls.notebook_thread = Thread(target=start_thread)
281s         cls.notebook_thread.daemon = True
281s         cls.notebook_thread.start()
281s         started.wait()
281s >       cls.wait_until_alive()
281s 
281s notebook/tests/launchnotebook.py:198: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s cls = 
281s 
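`setup_class` above starts the app in a daemon thread, signals readiness through an `Event`, then polls until a request succeeds. The same start-then-poll handshake can be sketched with a bare TCP listener in place of the notebook server (the `MAX_WAITTIME`/`POLL_INTERVAL` values are assumptions, not the suite's constants):

```python
import socket
import threading
import time

MAX_WAITTIME = 5.0    # assumed; stands in for the harness's MAX_WAITTIME
POLL_INTERVAL = 0.1   # assumed; stands in for the harness's POLL_INTERVAL

def wait_until_alive(port):
    """Poll until a TCP connect succeeds, like the harness's wait_until_alive()."""
    for _ in range(int(MAX_WAITTIME / POLL_INTERVAL)):
        try:
            with socket.create_connection(("127.0.0.1", port), timeout=1):
                return True
        except OSError:
            time.sleep(POLL_INTERVAL)
    return False

started = threading.Event()
state = {}

def start_thread():
    # Bind a listener on an OS-assigned port, then signal readiness --
    # the same Event handshake setup_class uses before polling begins.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    state["srv"] = srv
    state["port"] = srv.getsockname()[1]
    started.set()

server_thread = threading.Thread(target=start_thread, daemon=True)
server_thread.start()
started.wait(timeout=5)
alive = wait_until_alive(state["port"])
```

In the failing run above, `started.set()` fired from the `finally:` block even though `app.start()` never served, so the poll loop ran its full course against a dead port — which is why every attempt ends in `ConnectionRefusedError` rather than a hang.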
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E                   RuntimeError: The notebook server failed to start
281s 
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s _________ ERROR at setup of SessionAPITest.test_create_console_session _________
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s >           sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s     raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s 
281s     def create_connection(
281s         address: tuple[str, int],
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         source_address: tuple[str, int] | None = None,
281s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s     ) -> socket.socket:
281s         """Connect to *address* and return the socket object.
281s 
281s         Convenience function. Connect to *address* (a 2-tuple ``(host,
281s         port)``) and return the socket object.
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
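The tuple handling in `send()` above converts a `(connect, read)` pair into a single timeout object (`TimeoutSauce` is requests' internal alias for urllib3's `Timeout`). A minimal sketch of the same conversion against urllib3's public API (the numeric values are arbitrary examples):

```python
from urllib3.util import Timeout

# requests accepts timeout=(connect, read) and builds the equivalent of:
timeout = Timeout(connect=3.05, read=27)

# The connection pool later reads the two halves back via properties:
print(timeout.connect_timeout)  # 3.05
print(timeout.read_timeout)     # 27
```

Passing a single float instead sets both halves to the same value, matching the `TimeoutSauce(connect=timeout, read=timeout)` fallback branch in the log.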
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s ___________ ERROR at setup of SessionAPITest.test_create_deprecated ____________ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
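The `[Errno 111] Connection refused` at the bottom of this chain comes straight from the OS: nothing is listening on `localhost:12341` because the notebook server never started. The same failure mode can be reproduced with the stdlib alone (a sketch; the bind-then-close trick for finding an unused port is slightly racy):

```python
import socket

# Ask the OS for a free ephemeral port, then close the socket so that
# nothing is listening on that port anymore.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
unused_port = probe.getsockname()[1]
probe.close()

try:
    socket.create_connection(("127.0.0.1", unused_port), timeout=1)
except ConnectionRefusedError as e:
    # On Linux this is errno 111 (ECONNREFUSED), the same OSError that
    # urllib3 wraps in NewConnectionError in the traceback above.
    print(e.errno)
```

`create_connection` is the same convenience function urllib3's `util.connection.create_connection` wraps, so the `ConnectionRefusedError` surfaces through the `except OSError` branch of `_new_conn`.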
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
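With the adapter's `Retry(total=0)` visible in the traceback, a single connection failure exhausts the budget: `increment()` decrements `total` to -1, the new `Retry` reports `is_exhausted()`, and `MaxRetryError` is raised instead of retrying. A minimal sketch against urllib3's public `Retry` API (the URL and `OSError` are stand-ins for the real request and refused connection):

```python
from urllib3.util.retry import Retry
from urllib3.exceptions import MaxRetryError

retries = Retry(total=0)  # requests builds something similar for max_retries=0

try:
    # One failed attempt: total goes 0 -> -1, so the incremented Retry is
    # exhausted and increment() raises MaxRetryError rather than return it.
    retries.increment(
        method="GET",
        url="/a%40b/api/contents",
        error=OSError("[Errno 111] Connection refused"),
    )
except MaxRetryError as e:
    # The original error is preserved as .reason, which is what requests
    # inspects before re-raising its own ConnectionError.
    print(type(e.reason).__name__)
```

This is the exact hop shown in the log: `connectionpool.py` calls `retries.increment(...)`, which raises `MaxRetryError`, which `requests.adapters` then converts to `requests.exceptions.ConnectionError`.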
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
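The exception mapping quoted above ends with `requests` wrapping urllib3's `MaxRetryError` in a `requests.exceptions.ConnectionError`, which is exactly what the test harness observes. A minimal sketch of that end-to-end behavior, assuming `requests` is installed and that a bind-then-close probe yields a local port with no listener (a small race is possible but unlikely):

```python
# Sketch of the requests-level outcome recorded in this log: a GET to a
# port nothing is listening on raises requests.exceptions.ConnectionError
# (wrapping urllib3's MaxRetryError). The port-probing trick is an
# assumption for the demo, not part of the notebook test suite.
import socket
import requests

# Bind-then-close to obtain a local port that is very likely unused.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
dead_port = probe.getsockname()[1]
probe.close()

caught = None
try:
    # Mirrors fetch_url()'s plain requests.get() seen in the traceback.
    requests.get(f"http://127.0.0.1:{dead_port}/a%40b/api/contents", timeout=5)
except requests.exceptions.ConnectionError as e:
    caught = e

print(type(caught).__name__)
```

Because the adapter's `e.reason` here is a `NewConnectionError` rather than a `ConnectTimeoutError`, the code above falls through every `isinstance` branch and reaches the final `raise ConnectionError(e, request=request)`.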
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
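The `setup_class` listing above follows a common pattern: start the server in a daemon thread, signal readiness through a `threading.Event` (set in `finally` so a startup failure cannot hang the waiter), then poll an HTTP endpoint until it answers or the thread dies. A stdlib-only sketch of that pattern, with `MAX_WAITTIME`/`POLL_INTERVAL` values assumed (the real constants live in `launchnotebook.py`):

```python
# Minimal sketch (not the notebook harness itself) of the
# start-in-thread / Event / poll-until-alive pattern from setup_class
# and wait_until_alive above.
import threading
import time
import urllib.request
from http.server import HTTPServer, BaseHTTPRequestHandler

MAX_WAITTIME = 5.0    # assumed demo values
POLL_INTERVAL = 0.1

class _OK(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"alive")
    def log_message(self, *args):  # keep the demo quiet
        pass

started = threading.Event()
server = HTTPServer(("127.0.0.1", 0), _OK)  # port 0: let the OS pick

def run():
    started.set()  # set the event so failure to start doesn't cause a hang
    server.serve_forever()

t = threading.Thread(target=run, daemon=True)
t.start()
started.wait()

def wait_until_alive(url):
    """Poll until the server answers, mirroring the loop in the traceback."""
    for _ in range(int(MAX_WAITTIME / POLL_INTERVAL)):
        try:
            return urllib.request.urlopen(url).read()
        except OSError as e:
            if not t.is_alive():
                raise RuntimeError("The server failed to start") from e
            time.sleep(POLL_INTERVAL)
    raise RuntimeError("server never came up")

body = wait_until_alive(f"http://127.0.0.1:{server.server_port}/")
server.shutdown()
```

The `RuntimeError("The notebook server failed to start")` seen later in this log is the harness's version of that dead-thread check: the poll loop keeps retrying connection errors only while the server thread is still alive.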
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s __________ ERROR at setup of SessionAPITest.test_create_file_session ___________ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
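The root failure in every traceback here is the `sock.connect(sa)` call raising `ConnectionRefusedError`, because nothing is listening on port 12341. That can be reproduced with plain sockets, again assuming a bind-then-close probe leaves the port free for the demo:

```python
# Self-contained demonstration of the root error in this log: connecting
# to a port with no listener raises ConnectionRefusedError (errno 111 on
# Linux), which urllib3's create_connection surfaces and _new_conn wraps
# as NewConnectionError.
import socket

# Bind-then-close to obtain a local port that is very likely unused.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
dead_port = probe.getsockname()[1]
probe.close()

err = None
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    sock.connect(("127.0.0.1", dead_port))  # same call as sock.connect(sa) above
except ConnectionRefusedError as e:
    err = e
finally:
    sock.close()

print(type(err).__name__)  # ConnectionRefusedError
```

In `_new_conn` this surfaces through the `except OSError` branch (since `ConnectionRefusedError` subclasses `OSError`), producing the `NewConnectionError: ... [Errno 111] Connection refused` lines in this log.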
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
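The `retries` semantics documented above explain the `Retry(total=0, ...)` object seen in these tracebacks: requests' adapter passes `max_retries=0`, so one connection error exhausts the budget and `increment()` raises `MaxRetryError`. A hedged sketch, assuming urllib3 is installed and using the same port-probing trick as a stand-in for the absent notebook server:

```python
# Sketch of the Retry behavior from the docstring above: with total=0,
# a single connection failure exhausts the Retry and urlopen raises
# MaxRetryError wrapping the underlying NewConnectionError.
import socket
import urllib3
from urllib3.exceptions import MaxRetryError, NewConnectionError
from urllib3.util.retry import Retry

# Bind-then-close to obtain a local port that is very likely unused.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
dead_port = probe.getsockname()[1]
probe.close()

caught = None
pool = urllib3.HTTPConnectionPool("127.0.0.1", port=dead_port)
try:
    pool.urlopen("GET", "/a%40b/api/contents", retries=Retry(total=0))
except MaxRetryError as e:
    caught = e

print(type(caught.reason).__name__)  # NewConnectionError
```

This is the same path shown in the log: `urlopen` catches the connection error, calls `retries.increment(...)`, `total` drops below zero, `new_retry.is_exhausted()` is true, and `MaxRetryError(_pool, url, reason)` is raised with the `NewConnectionError` as its `reason`.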
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E                   RuntimeError: The notebook server failed to start
281s 
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s _________ ERROR at setup of SessionAPITest.test_create_with_kernel_id __________
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s >           sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s     raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s 
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s 
281s def create_connection(
281s     address: tuple[str, int],
281s     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s     source_address: tuple[str, int] | None = None,
281s     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s ) -> socket.socket:
281s     """Connect to *address* and return the socket object.
281s 
281s     Convenience function. Connect to *address* (a 2-tuple ``(host,
281s     port)``) and return the socket object. Passing the optional
281s     *timeout* parameter will set the timeout on the socket instance
281s     before attempting to connect. If no *timeout* is supplied, the
281s     global default timeout setting returned by :func:`socket.getdefaulttimeout`
281s     is used. If *source_address* is set it must be a tuple of (host, port)
281s     for the socket to bind as a source address before making the connection.
281s     An host of '' or port 0 tells the OS to use the default.
281s     """
281s 
281s     host, port = address
281s     if host.startswith("["):
281s         host = host.strip("[]")
281s     err = None
281s 
281s     # Using the value from allowed_gai_family() in the context of getaddrinfo lets
281s     # us select whether to work with IPv4 DNS records, IPv6 records, or both.
281s     # The original create_connection function always returns all records.
281s     family = allowed_gai_family()
281s 
281s     try:
281s         host.encode("idna")
281s     except UnicodeError:
281s         raise LocationParseError(f"'{host}', label empty or too long") from None
281s 
281s     for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
281s         af, socktype, proto, canonname, sa = res
281s         sock = None
281s         try:
281s             sock = socket.socket(af, socktype, proto)
281s 
281s             # If provided, set socket level options before connecting.
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s _________________ ERROR at setup of SessionAPITest.test_delete _________________ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s ____________ ERROR at setup of SessionAPITest.test_modify_kernel_id ____________ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s ___________ ERROR at setup of SessionAPITest.test_modify_kernel_name ___________ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send
281s         self.connect()
281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
281s     self.sock = self._new_conn()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s             sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s         except socket.gaierror as e:
281s             raise NameResolutionError(self.host, self, e) from e
281s         except SocketTimeout as e:
281s             raise ConnectTimeoutError(
281s                 self,
281s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
281s             ) from e
281s 
281s         except OSError as e:
281s >           raise NewConnectionError(
281s                 self, f"Failed to establish a new connection: {e}"
281s             ) from e
281s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause)
281s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
281s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s >               cls.fetch_url(url)
281s 
281s notebook/tests/launchnotebook.py:53: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s notebook/tests/launchnotebook.py:82: in fetch_url
281s     return requests.get(url)
281s /usr/lib/python3/dist-packages/requests/api.py:73: in get
281s     return request("get", url, params=params, **kwargs)
281s /usr/lib/python3/dist-packages/requests/api.py:59: in request
281s     return session.request(method=method, url=url, **kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
281s     resp = self.send(prep, **send_kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
281s     r = adapter.send(request, **kwargs)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s                 raise SSLError(e, request=request)
281s 
281s >           raise ConnectionError(e, request=request)
281s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def setup_class(cls):
281s         cls.tmp_dir = TemporaryDirectory()
281s         def tmp(*parts):
281s             path = os.path.join(cls.tmp_dir.name, *parts)
281s             try:
281s                 os.makedirs(path)
281s             except OSError as e:
281s                 if e.errno != errno.EEXIST:
281s                     raise
281s             return path
281s 
281s         cls.home_dir = tmp('home')
281s         data_dir = cls.data_dir = tmp('data')
281s         config_dir = cls.config_dir = tmp('config')
281s         runtime_dir = cls.runtime_dir = tmp('runtime')
281s         cls.notebook_dir = tmp('notebooks')
281s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
281s         cls.env_patch.start()
281s         # Patch systemwide & user-wide data & config directories, to isolate
281s         # the tests from oddities of the local setup. But leave Python env
281s         # locations alone, so data files for e.g. nbconvert are accessible.
281s         # If this isolation isn't sufficient, you may need to run the tests in
281s         # a virtualenv or conda env.
281s         cls.path_patch = patch.multiple(
281s             jupyter_core.paths,
281s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
281s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
281s         )
281s         cls.path_patch.start()
281s 
281s         config = cls.config or Config()
281s         config.NotebookNotary.db_file = ':memory:'
281s 
281s         cls.token = hexlify(os.urandom(4)).decode('ascii')
281s 
281s         started = Event()
281s         def start_thread():
281s             try:
281s                 bind_args = cls.get_bind_args()
281s                 app = cls.notebook = NotebookApp(
281s                     port_retries=0,
281s                     open_browser=False,
281s                     config_dir=cls.config_dir,
281s                     data_dir=cls.data_dir,
281s                     runtime_dir=cls.runtime_dir,
281s                     notebook_dir=cls.notebook_dir,
281s                     base_url=cls.url_prefix,
281s                     config=config,
281s                     allow_root=True,
281s                     token=cls.token,
281s                     **bind_args
281s                 )
281s                 if "asyncio" in sys.modules:
281s                     app._init_asyncio_patch()
281s                     import asyncio
281s 
281s                     asyncio.set_event_loop(asyncio.new_event_loop())
281s                     # Patch the current loop in order to match production
281s                     # behavior
281s                     import nest_asyncio
281s 
281s                     nest_asyncio.apply()
281s                 # don't register signal handler during tests
281s                 app.init_signal = lambda : None
281s                 # clear log handlers and propagate to root for nose to capture it
281s                 # needs to be redone after initialize, which reconfigures logging
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 app.initialize(argv=cls.get_argv())
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 loop = IOLoop.current()
281s                 loop.add_callback(started.set)
281s                 app.start()
281s             finally:
281s                 # set the event, so failure to start doesn't cause a hang
281s                 started.set()
281s                 app.session_manager.close()
281s         cls.notebook_thread = Thread(target=start_thread)
281s         cls.notebook_thread.daemon = True
281s         cls.notebook_thread.start()
281s         started.wait()
281s >       cls.wait_until_alive()
281s 
281s notebook/tests/launchnotebook.py:198: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E                   RuntimeError: The notebook server failed to start
281s 
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s ______________ ERROR at setup of SessionAPITest.test_modify_path _______________
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s >           sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s     raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s 
281s     def create_connection(
281s         address: tuple[str, int],
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         source_address: tuple[str, int] | None = None,
281s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s     ) -> socket.socket:
281s         """Connect to *address* and return the socket object.
281s 
281s         Convenience function. Connect to *address* (a 2-tuple ``(host,
281s         port)``) and return the socket object.
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send
281s         self.connect()
281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
281s     self.sock = self._new_conn()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s             sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s         except socket.gaierror as e:
281s             raise NameResolutionError(self.host, self, e) from e
281s         except SocketTimeout as e:
281s             raise ConnectTimeoutError(
281s                 self,
281s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
281s             ) from e
281s 
281s         except OSError as e:
281s >           raise NewConnectionError(
281s                 self, f"Failed to establish a new connection: {e}"
281s             ) from e
281s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s _________ ERROR at setup of SessionAPITest.test_modify_path_deprecated _________ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request)
281s
281s > raise ConnectionError(e, request=request)
281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s
281s The above exception was the direct cause of the following exception:
281s
281s cls =
281s
281s @classmethod
281s def setup_class(cls):
281s cls.tmp_dir = TemporaryDirectory()
281s def tmp(*parts):
281s path = os.path.join(cls.tmp_dir.name, *parts)
281s try:
281s os.makedirs(path)
281s except OSError as e:
281s if e.errno != errno.EEXIST:
281s raise
281s return path
281s
281s cls.home_dir = tmp('home')
281s data_dir = cls.data_dir = tmp('data')
281s config_dir = cls.config_dir = tmp('config')
281s runtime_dir = cls.runtime_dir = tmp('runtime')
281s cls.notebook_dir = tmp('notebooks')
281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
281s cls.env_patch.start()
281s # Patch systemwide & user-wide data & config directories, to isolate
281s # the tests from oddities of the local setup. But leave Python env
281s # locations alone, so data files for e.g. nbconvert are accessible.
281s # If this isolation isn't sufficient, you may need to run the tests in
281s # a virtualenv or conda env.
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod
281s def wait_until_alive(cls):
281s """Wait for the server to be alive"""
281s url = cls.base_url() + 'api/contents'
281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s try:
281s cls.fetch_url(url)
281s except ModuleNotFoundError as error:
281s # Errors that should be immediately thrown back to caller
281s raise error
281s except Exception as e:
281s if not cls.notebook_thread.is_alive():
281s > raise RuntimeError("The notebook server failed to start") from e
281s E RuntimeError: The notebook server failed to start
281s
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s ______________ ERROR at setup of SessionAPITest.test_modify_type _______________
281s
281s self =
281s
281s def _new_conn(self) -> socket.socket:
281s """Establish a socket connection and set nodelay settings on it.
281s
281s :return: New socket connection.
281s """
281s try:
281s > sock = connection.create_connection(
281s (self._dns_host, self.port),
281s self.timeout,
281s source_address=self.source_address,
281s socket_options=self.socket_options,
281s )
281s
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s
281s def create_connection(
281s address: tuple[str, int],
281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s source_address: tuple[str, int] | None = None,
281s socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s ) -> socket.socket:
281s """Connect to *address* and return the socket object.
281s
281s Convenience function. Connect to *address* (a 2-tuple ``(host,
281s port)``) and return the socket object.
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send
281s self.connect()
281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
281s self.sock = self._new_conn()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s self =
281s
281s def _new_conn(self) -> socket.socket:
281s """Establish a socket connection and set nodelay settings on it.
281s
281s :return: New socket connection.
281s """
281s try:
281s sock = connection.create_connection(
281s (self._dns_host, self.port),
281s self.timeout,
281s source_address=self.source_address,
281s socket_options=self.socket_options,
281s )
281s except socket.gaierror as e:
281s raise NameResolutionError(self.host, self, e) from e
281s except SocketTimeout as e:
281s raise ConnectTimeoutError(
281s self,
281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
281s ) from e
281s
281s except OSError as e:
281s > raise NewConnectionError(
281s self, f"Failed to establish a new connection: {e}"
281s ) from e
281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
281s
281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
281s
281s The above exception was the direct cause of the following exception:
281s
281s self =
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s
281s def send(
281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s ):
281s """Sends PreparedRequest object. Returns Response object.
281s
281s :param request: The :class:`PreparedRequest ` being sent.
281s :param stream: (optional) Whether to stream the request content.
281s :param timeout: (optional) How long to wait for the server to send
281s data before giving up, as a float, or a :ref:`(connect timeout,
281s read timeout) ` tuple.
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause)
281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s
281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
281s
281s During handling of the above exception, another exception occurred:
281s
281s cls =
281s
281s @classmethod
281s def wait_until_alive(cls):
281s """Wait for the server to be alive"""
281s url = cls.base_url() + 'api/contents'
281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s try:
281s > cls.fetch_url(url)
281s
281s notebook/tests/launchnotebook.py:53:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s notebook/tests/launchnotebook.py:82: in fetch_url
281s return requests.get(url)
281s /usr/lib/python3/dist-packages/requests/api.py:73: in get
281s return request("get", url, params=params, **kwargs)
281s /usr/lib/python3/dist-packages/requests/api.py:59: in request
281s return session.request(method=method, url=url, **kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
281s resp = self.send(prep, **send_kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
281s r = adapter.send(request, **kwargs)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s self =
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s
281s def send(
281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s ):
281s """Sends PreparedRequest object. Returns Response object.
281s
281s :param request: The :class:`PreparedRequest ` being sent.
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request)
281s
281s > raise ConnectionError(e, request=request)
281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s
281s The above exception was the direct cause of the following exception:
281s
281s cls =
281s
281s @classmethod
281s def setup_class(cls):
281s cls.tmp_dir = TemporaryDirectory()
281s def tmp(*parts):
281s path = os.path.join(cls.tmp_dir.name, *parts)
281s try:
281s os.makedirs(path)
281s except OSError as e:
281s if e.errno != errno.EEXIST:
281s raise
281s return path
281s
281s cls.home_dir = tmp('home')
281s data_dir = cls.data_dir = tmp('data')
281s config_dir = cls.config_dir = tmp('config')
281s runtime_dir = cls.runtime_dir = tmp('runtime')
281s cls.notebook_dir = tmp('notebooks')
281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
281s cls.env_patch.start()
281s # Patch systemwide & user-wide data & config directories, to isolate
281s # the tests from oddities of the local setup. But leave Python env
281s # locations alone, so data files for e.g. nbconvert are accessible.
281s # If this isolation isn't sufficient, you may need to run the tests in
281s # a virtualenv or conda env.
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s ______________ ERROR at setup of AsyncSessionAPITest.test_create _______________ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s _set_socket_options(sock, socket_options)
281s
281s if timeout is not _DEFAULT_TIMEOUT:
281s sock.settimeout(timeout)
281s if source_address:
281s sock.bind(source_address)
281s > sock.connect(sa)
281s E ConnectionRefusedError: [Errno 111] Connection refused
281s
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
281s
281s The above exception was the direct cause of the following exception:
281s
281s self =
281s method = 'GET', url = '/a%40b/api/contents', body = None
281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s redirect = False, assert_same_host = False
281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
281s release_conn = False, chunked = False, body_pos = None, preload_content = False
281s decode_content = False, response_kw = {}
281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
281s destination_scheme = None, conn = None, release_this_conn = True
281s http_tunnel_required = False, err = None, clean_exit = False
281s
281s def urlopen( # type: ignore[override]
281s self,
281s method: str,
281s url: str,
281s body: _TYPE_BODY | None = None,
281s headers: typing.Mapping[str, str] | None = None,
281s retries: Retry | bool | int | None = None,
281s redirect: bool = True,
281s assert_same_host: bool = True,
281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s pool_timeout: int | None = None,
281s release_conn: bool | None = None,
281s chunked: bool = False,
281s body_pos: _TYPE_BODY_POSITION | None = None,
281s preload_content: bool = True,
281s decode_content: bool = True,
281s **response_kw: typing.Any,
281s ) -> BaseHTTPResponse:
281s """
281s Get a connection from the pool and perform an HTTP request. This is the
281s lowest level call for making a request, so you'll need to specify all
281s the raw details.
281s
281s .. note::
281s
281s More commonly, it's appropriate to use a convenience method
281s such as :meth:`request`.
281s
281s .. note::
281s
281s `release_conn` will only behave as expected if
281s `preload_content=False` because we want to make
281s `preload_content=False` the default behaviour someday soon without
281s breaking backwards compatibility.
281s
281s :param method:
281s HTTP request method (such as GET, POST, PUT, etc.)
281s
281s :param url:
281s The URL to perform the request on.
281s
281s :param body:
281s Data to send in the request body, either :class:`str`, :class:`bytes`,
281s an iterable of :class:`str`/:class:`bytes`, or a file-like object.
281s
281s :param headers:
281s Dictionary of custom headers to send, such as User-Agent,
281s If-None-Match, etc. If None, pool headers are used. If provided,
281s these headers completely replace any pool-specific headers.
281s
281s :param retries:
281s Configure the number of retries to allow before raising a
281s :class:`~urllib3.exceptions.MaxRetryError` exception.
281s
281s Pass ``None`` to retry until you receive a response. Pass a
281s :class:`~urllib3.util.retry.Retry` object for fine-grained control
281s over different types of retries.
281s Pass an integer number to retry connection errors that many times,
281s but no other types of errors. Pass zero to never retry.
281s
281s If ``False``, then retries are disabled and any exception is raised
281s immediately. Also, instead of raising a MaxRetryError on redirects,
281s the redirect response will be returned.
281s
281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
281s
281s :param redirect:
281s If True, automatically handle redirects (status codes 301, 302,
281s 303, 307, 308). Each redirect counts as a retry. Disabling retries
281s will disable redirect, too.
281s
281s :param assert_same_host:
281s If ``True``, will make sure that the host of the pool requests is
281s consistent else will raise HostChangedError. When ``False``, you can
281s use the pool on an HTTP proxy and request foreign hosts.
281s
281s :param timeout:
281s If specified, overrides the default timeout for this one
281s request. It may be a float (in seconds) or an instance of
281s :class:`urllib3.util.Timeout`.
281s
281s :param pool_timeout:
281s If set and the pool is set to block=True, then this method will
281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no
281s connection is available within the time period.
281s
281s :param bool preload_content:
281s If True, the response's body will be preloaded into memory.
281s
281s :param bool decode_content:
281s If True, will attempt to decode the body based on the
281s 'content-encoding' header.
281s
281s :param release_conn:
281s If False, then the urlopen call will not release the connection
281s back into the pool once a response is received (but will release if
281s you read the entire contents of the response such as when
281s `preload_content=True`). This is useful if you're not preloading
281s the response's content immediately. You will need to call
281s ``r.release_conn()`` on the response ``r`` to return the connection
281s back into the pool. If None, it takes the value of ``preload_content``
281s which defaults to ``True``.
281s
281s :param bool chunked:
281s If True, urllib3 will send the body using chunked transfer
281s encoding. Otherwise, urllib3 will send the body using the standard
281s content-length form. Defaults to False.
281s
281s :param int body_pos:
281s Position to seek to in file-like body in the event of a retry or
281s redirect. Typically this won't need to be set because urllib3 will
281s auto-populate the value when needed.
281s """
281s parsed_url = parse_url(url)
281s destination_scheme = parsed_url.scheme
281s
281s if headers is None:
281s headers = self.headers
281s
281s if not isinstance(retries, Retry):
281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
281s
281s if release_conn is None:
281s release_conn = preload_content
281s
281s # Check host
281s if assert_same_host and not self.is_same_host(url):
281s raise HostChangedError(self, url, retries)
281s
281s # Ensure that the URL we're connecting to is properly encoded
281s if url.startswith("/"):
281s url = to_str(_encode_target(url))
281s else:
281s url = to_str(parsed_url.url)
281s
281s conn = None
281s
281s # Track whether `conn` needs to be released before
281s # returning/raising/recursing. Update this variable if necessary, and
281s # leave `release_conn` constant throughout the function. That way, if
281s # the function recurses, the original value of `release_conn` will be
281s # passed down into the recursive call, and its value will be respected.
281s #
281s # See issue #651 [1] for details.
281s #
281s # [1]
281s release_this_conn = release_conn
281s
281s http_tunnel_required = connection_requires_http_tunnel(
281s self.proxy, self.proxy_config, destination_scheme
281s )
281s
281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We
281s # have to copy the headers dict so we can safely change it without those
281s # changes being reflected in anyone else's copy.
281s if not http_tunnel_required:
281s headers = headers.copy() # type: ignore[attr-defined]
281s headers.update(self.proxy_headers) # type: ignore[union-attr]
281s
281s # Must keep the exception bound to a separate variable or else Python 3
281s # complains about UnboundLocalError.
281s err = None
281s
281s # Keep track of whether we cleanly exited the except block. This
281s # ensures we do proper cleanup in finally.
281s clean_exit = False
281s
281s # Rewind body position, if needed. Record current position
281s # for future rewinds in the event of a redirect/retry.
281s body_pos = set_file_position(body, body_pos)
281s
281s try:
281s # Request a connection from the queue.
281s timeout_obj = self._get_timeout(timeout)
281s conn = self._get_conn(timeout=pool_timeout)
281s
281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment]
281s
281s # Is this a closed/new connection that requires CONNECT tunnelling?
281s if self.proxy is not None and http_tunnel_required and conn.is_closed:
281s try:
281s self._prepare_proxy(conn)
281s except (BaseSSLError, OSError, SocketTimeout) as e:
281s self._raise_timeout(
281s err=e, url=self.proxy.url, timeout_value=conn.timeout
281s )
281s raise
281s
281s # If we're going to release the connection in ``finally:``, then
281s # the response doesn't need to know about the connection. Otherwise
281s # it will also try to release it and we'll have a double-release
281s # mess.
281s response_conn = conn if not release_conn else None
281s
281s # Make the request on the HTTPConnection object
281s > response = self._make_request(
281s conn,
281s method,
281s url,
281s timeout=timeout_obj,
281s body=body,
281s headers=headers,
281s chunked=chunked,
281s retries=retries,
281s response_conn=response_conn,
281s preload_content=preload_content,
281s decode_content=decode_content,
281s **response_kw,
281s )
281s
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
281s conn.request(
281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
281s self.endheaders()
281s /usr/lib/python3.12/http/client.py:1331: in endheaders
281s self._send_output(message_body, encode_chunked=encode_chunked)
281s /usr/lib/python3.12/http/client.py:1091: in _send_output
281s self.send(msg)
281s /usr/lib/python3.12/http/client.py:1035: in send
281s self.connect()
281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
281s self.sock = self._new_conn()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s self =
281s
281s def _new_conn(self) -> socket.socket:
281s """Establish a socket connection and set nodelay settings on it.
281s
281s :return: New socket connection.
281s """
281s try:
281s sock = connection.create_connection(
281s (self._dns_host, self.port),
281s self.timeout,
281s source_address=self.source_address,
281s socket_options=self.socket_options,
281s )
281s except socket.gaierror as e:
281s raise NameResolutionError(self.host, self, e) from e
281s except SocketTimeout as e:
281s raise ConnectTimeoutError(
281s self,
281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
281s ) from e
281s
281s except OSError as e:
281s > raise NewConnectionError(
281s self, f"Failed to establish a new connection: {e}"
281s ) from e
281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
281s
281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
281s
281s The above exception was the direct cause of the following exception:
281s
281s self =
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s
281s def send(
281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s ):
281s """Sends PreparedRequest object. Returns Response object.
281s
281s :param request: The :class:`PreparedRequest ` being sent.
281s :param stream: (optional) Whether to stream the request content.
281s :param timeout: (optional) How long to wait for the server to send
281s data before giving up, as a float, or a :ref:`(connect timeout,
281s read timeout) ` tuple.
281s :type timeout: float or tuple or urllib3 Timeout object
281s :param verify: (optional) Either a boolean, in which case it controls whether
281s we verify the server's TLS certificate, or a string, in which case it
281s must be a path to a CA bundle to use
281s :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s :param proxies: (optional) The proxies dictionary to apply to the request.
281s :rtype: requests.Response
281s """
281s
281s try:
281s conn = self.get_connection(request.url, proxies)
281s except LocationValueError as e:
281s raise InvalidURL(e, request=request)
281s
281s self.cert_verify(conn, request.url, verify, cert)
281s url = self.request_url(request, proxies)
281s self.add_headers(
281s request,
281s stream=stream,
281s timeout=timeout,
281s verify=verify,
281s cert=cert,
281s proxies=proxies,
281s )
281s
281s chunked = not (request.body is None or "Content-Length" in request.headers)
281s
281s if isinstance(timeout, tuple):
281s try:
281s connect, read = timeout
281s timeout = TimeoutSauce(connect=connect, read=read)
281s except ValueError:
281s raise ValueError(
281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s f"or a single float to set both timeouts to the same value."
281s )
281s elif isinstance(timeout, TimeoutSauce):
281s pass
281s else:
281s timeout = TimeoutSauce(connect=timeout, read=timeout)
281s
281s try:
281s > resp = conn.urlopen(
281s method=request.method,
281s url=url,
281s body=request.body,
281s headers=request.headers,
281s redirect=False,
281s assert_same_host=False,
281s preload_content=False,
281s decode_content=False,
281s retries=self.max_retries,
281s timeout=timeout,
281s chunked=chunked,
281s )
281s
281s /usr/lib/python3/dist-packages/requests/adapters.py:486:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
281s retries = retries.increment(
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s method = 'GET', url = '/a%40b/api/contents', response = None
281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
281s _pool =
281s _stacktrace =
281s
281s def increment(
281s self,
281s method: str | None = None,
281s url: str | None = None,
281s response: BaseHTTPResponse | None = None,
281s error: Exception | None = None,
281s _pool: ConnectionPool | None = None,
281s _stacktrace: TracebackType | None = None,
281s ) -> Retry:
281s """Return a new Retry object with incremented retry counters.
281s
281s :param response: A response object, or None, if the server did not
281s return a response.
281s :type response: :class:`~urllib3.response.BaseHTTPResponse`
281s :param Exception error: An error encountered during the request, or
281s None if the response was received successfully.
281s
281s :return: A new ``Retry`` object.
281s """
281s if self.total is False and error:
281s # Disabled, indicate to re-raise the error.
281s raise reraise(type(error), error, _stacktrace)
281s
281s total = self.total
281s if total is not None:
281s total -= 1
281s
281s connect = self.connect
281s read = self.read
281s redirect = self.redirect
281s status_count = self.status
281s other = self.other
281s cause = "unknown"
281s status = None
281s redirect_location = None
281s
281s if error and self._is_connection_error(error):
281s # Connect retry?
281s if connect is False:
281s raise reraise(type(error), error, _stacktrace)
281s elif connect is not None:
281s connect -= 1
281s
281s elif error and self._is_read_error(error):
281s # Read retry?
281s if read is False or method is None or not self._is_method_retryable(method):
281s raise reraise(type(error), error, _stacktrace)
281s elif read is not None:
281s read -= 1
281s
281s elif error:
281s # Other retry?
281s if other is not None:
281s other -= 1
281s
281s elif response and response.get_redirect_location():
281s # Redirect retry?
281s if redirect is not None:
281s redirect -= 1
281s cause = "too many redirects"
281s response_redirect_location = response.get_redirect_location()
281s if response_redirect_location:
281s redirect_location = response_redirect_location
281s status = response.status
281s
281s else:
281s # Incrementing because of a server error like a 500 in
281s # status_forcelist and the given method is in the allowed_methods
281s cause = ResponseError.GENERIC_ERROR
281s if response and response.status:
281s if status_count is not None:
281s status_count -= 1
281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
281s status = response.status
281s
281s history = self.history + (
281s RequestHistory(method, url, error, status, redirect_location),
281s )
281s
281s new_retry = self.new(
281s total=total,
281s connect=connect,
281s read=read,
281s redirect=redirect,
281s status=status_count,
281s other=other,
281s history=history,
281s )
281s
281s if new_retry.is_exhausted():
281s reason = error or ResponseError(cause)
281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s
281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
281s
281s During handling of the above exception, another exception occurred:
281s
281s cls =
281s
281s @classmethod
281s def wait_until_alive(cls):
281s """Wait for the server to be alive"""
281s url = cls.base_url() + 'api/contents'
281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s try:
281s > cls.fetch_url(url)
281s
281s notebook/tests/launchnotebook.py:53:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s notebook/tests/launchnotebook.py:82: in fetch_url
281s return requests.get(url)
281s /usr/lib/python3/dist-packages/requests/api.py:73: in get
281s return request("get", url, params=params, **kwargs)
281s /usr/lib/python3/dist-packages/requests/api.py:59: in request
281s return session.request(method=method, url=url, **kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
281s resp = self.send(prep, **send_kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
281s r = adapter.send(request, **kwargs)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s self =
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s
281s def send(
281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s ):
281s """Sends PreparedRequest object. Returns Response object.
281s
281s :param request: The :class:`PreparedRequest ` being sent.
281s :param stream: (optional) Whether to stream the request content.
281s :param timeout: (optional) How long to wait for the server to send
281s data before giving up, as a float, or a :ref:`(connect timeout,
281s read timeout) ` tuple.
281s :type timeout: float or tuple or urllib3 Timeout object
281s :param verify: (optional) Either a boolean, in which case it controls whether
281s we verify the server's TLS certificate, or a string, in which case it
281s must be a path to a CA bundle to use
281s :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s :param proxies: (optional) The proxies dictionary to apply to the request.
281s :rtype: requests.Response
281s """
281s
281s try:
281s conn = self.get_connection(request.url, proxies)
281s except LocationValueError as e:
281s raise InvalidURL(e, request=request)
281s
281s self.cert_verify(conn, request.url, verify, cert)
281s url = self.request_url(request, proxies)
281s self.add_headers(
281s request,
281s stream=stream,
281s timeout=timeout,
281s verify=verify,
281s cert=cert,
281s proxies=proxies,
281s )
281s
281s chunked = not (request.body is None or "Content-Length" in request.headers)
281s
281s if isinstance(timeout, tuple):
281s try:
281s connect, read = timeout
281s timeout = TimeoutSauce(connect=connect, read=read)
281s except ValueError:
281s raise ValueError(
281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s f"or a single float to set both timeouts to the same value."
281s )
281s elif isinstance(timeout, TimeoutSauce):
281s pass
281s else:
281s timeout = TimeoutSauce(connect=timeout, read=timeout)
281s
281s try:
281s resp = conn.urlopen(
281s method=request.method,
281s url=url,
281s body=request.body,
281s headers=request.headers,
281s redirect=False,
281s assert_same_host=False,
281s preload_content=False,
281s decode_content=False,
281s retries=self.max_retries,
281s timeout=timeout,
281s chunked=chunked,
281s )
281s
281s except (ProtocolError, OSError) as err:
281s raise ConnectionError(err, request=request)
281s
281s except MaxRetryError as e:
281s if isinstance(e.reason, ConnectTimeoutError):
281s # TODO: Remove this in 3.0.0: see #2811
281s if not isinstance(e.reason, NewConnectionError):
281s raise ConnectTimeout(e, request=request)
281s
281s if isinstance(e.reason, ResponseError):
281s raise RetryError(e, request=request)
281s
281s if isinstance(e.reason, _ProxyError):
281s raise ProxyError(e, request=request)
281s
281s if isinstance(e.reason, _SSLError):
281s # This branch is for urllib3 v1.22 and later.
281s raise SSLError(e, request=request)
281s
281s > raise ConnectionError(e, request=request)
281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s
281s The above exception was the direct cause of the following exception:
281s
281s cls =
281s
281s @classmethod
281s def setup_class(cls):
281s if not async_testing_enabled: # Can be removed once jupyter_client >= 6.1 is required.
281s raise SkipTest("AsyncSessionAPITest tests skipped due to down-level jupyter_client!")
281s > super().setup_class()
281s
281s notebook/services/sessions/tests/test_sessions_api.py:274:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s notebook/tests/launchnotebook.py:198: in setup_class
281s cls.wait_until_alive()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s cls =
281s
281s @classmethod
281s def wait_until_alive(cls):
281s """Wait for the server to be alive"""
281s url = cls.base_url() + 'api/contents'
281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s try:
281s cls.fetch_url(url)
281s except ModuleNotFoundError as error:
281s # Errors that should be immediately thrown back to caller
281s raise error
281s except Exception as e:
281s if not cls.notebook_thread.is_alive():
281s > raise RuntimeError("The notebook server failed to start") from e
281s E RuntimeError: The notebook server failed to start
281s
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s ______ ERROR at setup of AsyncSessionAPITest.test_create_console_session _______
281s
281s self =
281s
281s def _new_conn(self) -> socket.socket:
281s """Establish a socket connection and set nodelay settings on it.
281s
281s :return: New socket connection.
281s """
281s try:
281s > sock = connection.create_connection(
281s (self._dns_host, self.port),
281s self.timeout,
281s source_address=self.source_address,
281s socket_options=self.socket_options,
281s )
281s
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s
281s def create_connection(
281s address: tuple[str, int],
281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s source_address: tuple[str, int] | None = None,
281s socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s ) -> socket.socket:
281s """Connect to *address* and return the socket object.
281s
281s Convenience function. Connect to *address* (a 2-tuple ``(host,
281s port)``) and return the socket object. Passing the optional
281s *timeout* parameter will set the timeout on the socket instance
281s before attempting to connect. If no *timeout* is supplied, the
281s global default timeout setting returned by :func:`socket.getdefaulttimeout`
281s is used. If *source_address* is set it must be a tuple of (host, port)
281s for the socket to bind as a source address before making the connection.
281s An host of '' or port 0 tells the OS to use the default.
281s """
281s
281s host, port = address
281s if host.startswith("["):
281s host = host.strip("[]")
281s err = None
281s
281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets
281s # us select whether to work with IPv4 DNS records, IPv6 records, or both.
281s # The original create_connection function always returns all records.
281s family = allowed_gai_family()
281s
281s try:
281s host.encode("idna")
281s except UnicodeError:
281s raise LocationParseError(f"'{host}', label empty or too long") from None
281s
281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
281s af, socktype, proto, canonname, sa = res
281s sock = None
281s try:
281s sock = socket.socket(af, socktype, proto)
281s
281s # If provided, set socket level options before connecting.
281s _set_socket_options(sock, socket_options)
281s
281s if timeout is not _DEFAULT_TIMEOUT:
281s sock.settimeout(timeout)
281s if source_address:
281s sock.bind(source_address)
281s > sock.connect(sa)
281s E ConnectionRefusedError: [Errno 111] Connection refused
281s
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
281s
281s The above exception was the direct cause of the following exception:
281s
281s self =
281s method = 'GET', url = '/a%40b/api/contents', body = None
281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s redirect = False, assert_same_host = False
281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
281s release_conn = False, chunked = False, body_pos = None, preload_content = False
281s decode_content = False, response_kw = {}
281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
281s destination_scheme = None, conn = None, release_this_conn = True
281s http_tunnel_required = False, err = None, clean_exit = False
281s
281s def urlopen( # type: ignore[override]
281s self,
281s method: str,
281s url: str,
281s body: _TYPE_BODY | None = None,
281s headers: typing.Mapping[str, str] | None = None,
281s retries: Retry | bool | int | None = None,
281s redirect: bool = True,
281s assert_same_host: bool = True,
281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s pool_timeout: int | None = None,
281s release_conn: bool | None = None,
281s chunked: bool = False,
281s body_pos: _TYPE_BODY_POSITION | None = None,
281s preload_content: bool = True,
281s decode_content: bool = True,
281s **response_kw: typing.Any,
281s ) -> BaseHTTPResponse:
281s """
281s Get a connection from the pool and perform an HTTP request. This is the
281s lowest level call for making a request, so you'll need to specify all
281s the raw details.
281s
281s .. note::
281s
281s More commonly, it's appropriate to use a convenience method
281s such as :meth:`request`.
281s
281s .. note::
281s
281s `release_conn` will only behave as expected if
281s `preload_content=False` because we want to make
281s `preload_content=False` the default behaviour someday soon without
281s breaking backwards compatibility.
281s
281s :param method:
281s HTTP request method (such as GET, POST, PUT, etc.)
281s
281s :param url:
281s The URL to perform the request on.
281s
281s :param body:
281s Data to send in the request body, either :class:`str`, :class:`bytes`,
281s an iterable of :class:`str`/:class:`bytes`, or a file-like object.
281s
281s :param headers:
281s Dictionary of custom headers to send, such as User-Agent,
281s If-None-Match, etc. If None, pool headers are used. If provided,
281s these headers completely replace any pool-specific headers.
281s
281s :param retries:
281s Configure the number of retries to allow before raising a
281s :class:`~urllib3.exceptions.MaxRetryError` exception.
281s
281s Pass ``None`` to retry until you receive a response. Pass a
281s :class:`~urllib3.util.retry.Retry` object for fine-grained control
281s over different types of retries.
281s Pass an integer number to retry connection errors that many times,
281s but no other types of errors. Pass zero to never retry.
281s
281s If ``False``, then retries are disabled and any exception is raised
281s immediately. Also, instead of raising a MaxRetryError on redirects,
281s the redirect response will be returned.
281s
281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
281s
281s :param redirect:
281s If True, automatically handle redirects (status codes 301, 302,
281s 303, 307, 308). Each redirect counts as a retry. Disabling retries
281s will disable redirect, too.
281s
281s :param assert_same_host:
281s If ``True``, will make sure that the host of the pool requests is
281s consistent else will raise HostChangedError. When ``False``, you can
281s use the pool on an HTTP proxy and request foreign hosts.
281s
281s :param timeout:
281s If specified, overrides the default timeout for this one
281s request. It may be a float (in seconds) or an instance of
281s :class:`urllib3.util.Timeout`.
281s
281s :param pool_timeout:
281s If set and the pool is set to block=True, then this method will
281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no
281s connection is available within the time period.
281s
281s :param bool preload_content:
281s If True, the response's body will be preloaded into memory.
281s
281s :param bool decode_content:
281s If True, will attempt to decode the body based on the
281s 'content-encoding' header.
281s
281s :param release_conn:
281s If False, then the urlopen call will not release the connection
281s back into the pool once a response is received (but will release if
281s you read the entire contents of the response such as when
281s `preload_content=True`). This is useful if you're not preloading
281s the response's content immediately. You will need to call
281s ``r.release_conn()`` on the response ``r`` to return the connection
281s back into the pool. If None, it takes the value of ``preload_content``
281s which defaults to ``True``.
281s
281s :param bool chunked:
281s If True, urllib3 will send the body using chunked transfer
281s encoding. Otherwise, urllib3 will send the body using the standard
281s content-length form. Defaults to False.
281s
281s :param int body_pos:
281s Position to seek to in file-like body in the event of a retry or
281s redirect. Typically this won't need to be set because urllib3 will
281s auto-populate the value when needed.
281s """
281s parsed_url = parse_url(url)
281s destination_scheme = parsed_url.scheme
281s
281s if headers is None:
281s headers = self.headers
281s
281s if not isinstance(retries, Retry):
281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
281s
281s if release_conn is None:
281s release_conn = preload_content
281s
281s # Check host
281s if assert_same_host and not self.is_same_host(url):
281s raise HostChangedError(self, url, retries)
281s
281s # Ensure that the URL we're connecting to is properly encoded
281s if url.startswith("/"):
281s url = to_str(_encode_target(url))
281s else:
281s url = to_str(parsed_url.url)
281s
281s conn = None
281s
281s # Track whether `conn` needs to be released before
281s # returning/raising/recursing. Update this variable if necessary, and
281s # leave `release_conn` constant throughout the function. That way, if
281s # the function recurses, the original value of `release_conn` will be
281s # passed down into the recursive call, and its value will be respected.
281s #
281s # See issue #651 [1] for details.
281s #
281s # [1]
281s release_this_conn = release_conn
281s
281s http_tunnel_required = connection_requires_http_tunnel(
281s self.proxy, self.proxy_config, destination_scheme
281s )
281s
281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We
281s # have to copy the headers dict so we can safely change it without those
281s # changes being reflected in anyone else's copy.
281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 
281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 
281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | 
None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 
281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s 
/usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 
281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s if not async_testing_enabled: # Can be removed once jupyter_client >= 6.1 is required. 281s raise SkipTest("AsyncSessionAPITest tests skipped due to down-level jupyter_client!") 281s > super().setup_class() 281s 281s notebook/services/sessions/tests/test_sessions_api.py:274: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:198: in setup_class 281s cls.wait_until_alive() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s _________ ERROR at setup of AsyncSessionAPITest.test_create_deprecated _________ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 
281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 
281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = 
True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. 
Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. 
Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 
281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 
281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 
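Editorial note: every failure in this log bottoms out in `[Errno 111] Connection refused` — nothing is listening on localhost:12341 because the notebook server never started. The root cause is easy to reproduce with the standard library alone; this sketch finds a port with no listener dynamically rather than assuming any particular port number:

```python
import errno
import socket

# Find a port that has no listener: bind an ephemeral port, then release it.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
port = probe.getsockname()[1]
probe.close()

try:
    # socket.create_connection is the same call chain urllib3's _new_conn()
    # ends in; with no listener, the kernel answers with ECONNREFUSED.
    socket.create_connection(("127.0.0.1", port), timeout=2)
except ConnectionRefusedError as exc:
    err = exc

print(err.errno == errno.ECONNREFUSED)  # True (errno 111 on Linux)
```

On loopback the refusal is immediate, which is why the traceback shows `ConnectionRefusedError` rather than a connect timeout.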
281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | 
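Editorial note: `send()` above accepts a timeout as a scalar, a `(connect, read)` tuple, or a prebuilt urllib3 `Timeout`, and normalizes it before calling `urlopen`. The tuple/scalar rule in isolation looks like this (`normalize_timeout` is an illustrative name, not part of the requests API):

```python
def normalize_timeout(timeout):
    """Mirror requests' rule: a tuple is (connect, read); a scalar sets both."""
    if isinstance(timeout, tuple):
        try:
            connect, read = timeout
        except ValueError:
            raise ValueError(
                f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
                "or a single float to set both timeouts to the same value."
            )
        return connect, read
    return timeout, timeout

print(normalize_timeout(3.0))     # (3.0, 3.0)
print(normalize_timeout((1, 5)))  # (1, 5)
```

A scalar applies to both phases independently, so a single `timeout=3` can still wait up to 3 s connecting plus 3 s reading.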
None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 
281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s 
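Editorial note: `Retry.increment()` above is pure bookkeeping — decrement whichever counter matches the error class, then raise `MaxRetryError` once any counter drops below zero. The harness uses `Retry(total=0, ...)`, so the very first refused connection exhausts the budget. A toy model of that rule (not the urllib3 API):

```python
class RetryBudget:
    """Toy model of urllib3's Retry counters; None means 'no cap of this kind'."""

    def __init__(self, total=None, connect=None, read=None):
        self.total, self.connect, self.read = total, connect, read

    def increment(self, kind):
        if self.total is not None:
            self.total -= 1
        if kind == "connect" and self.connect is not None:
            self.connect -= 1
        if kind == "read" and self.read is not None:
            self.read -= 1
        if self.is_exhausted():
            # urllib3 raises MaxRetryError here, chaining the original error.
            raise RuntimeError("max retries exceeded")

    def is_exhausted(self):
        counts = [c for c in (self.total, self.connect, self.read) if c is not None]
        return bool(counts) and min(counts) < 0

budget = RetryBudget(total=0)
try:
    budget.increment("connect")  # first connection error already exhausts total=0
except RuntimeError as e:
    print(e)                     # max retries exceeded
```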
/usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 
281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s                 raise SSLError(e, request=request)
281s 
281s >           raise ConnectionError(e, request=request)
281s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def setup_class(cls):
281s         if not async_testing_enabled:  # Can be removed once jupyter_client >= 6.1 is required.
281s             raise SkipTest("AsyncSessionAPITest tests skipped due to down-level jupyter_client!")
281s >       super().setup_class()
281s 
281s notebook/services/sessions/tests/test_sessions_api.py:274: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s notebook/tests/launchnotebook.py:198: in setup_class
281s     cls.wait_until_alive()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E                   RuntimeError: The notebook server failed to start
281s 
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s ________ ERROR at setup of AsyncSessionAPITest.test_create_file_session ________
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
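Editorial note: `wait_until_alive()` above is a standard poll loop with one refinement — it distinguishes "the server is still starting" (keep polling) from "the server thread has died" (fail fast with the RuntimeError seen here). The shape of that loop, as a generic sketch (parameter names are illustrative, not the harness's):

```python
import time

def wait_until_alive(check_alive, thread_is_alive=lambda: True,
                     max_wait=30.0, interval=1.0):
    """Poll check_alive() until it succeeds, failing fast if the server died."""
    last_exc = None
    for _ in range(int(max_wait / interval)):
        try:
            return check_alive()
        except Exception as exc:
            last_exc = exc
            if not thread_is_alive():
                # The branch that fired in this log: the server is simply gone.
                raise RuntimeError("The notebook server failed to start") from exc
            time.sleep(interval)
    raise TimeoutError("server did not become responsive in time") from last_exc

# A probe that fails twice before succeeding, standing in for the HTTP check.
attempts = []
def flaky_probe():
    attempts.append(1)
    if len(attempts) < 3:
        raise ConnectionError("refused")
    return "ok"

print(wait_until_alive(flaky_probe, interval=0.01))  # ok
```

Failing fast on a dead server thread is what turns thirty seconds of useless polling into the immediate, accurate diagnosis this log shows.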
281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 
281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = 
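Editorial note: `create_connection()` above resolves the host once, then tries one socket per `getaddrinfo()` result, so IPv4 and IPv6 answers for `localhost` each get a turn before the last error is raised. What that loop iterates over (the exact entries depend on the local resolver):

```python
import socket

# Each entry is (family, type, proto, canonname, sockaddr); create_connection
# opens a socket per entry and keeps the first one that connects.
entries = socket.getaddrinfo("localhost", 12341, socket.AF_UNSPEC, socket.SOCK_STREAM)
for family, socktype, proto, canonname, sockaddr in entries:
    print(family.name, sockaddr)
```

When every entry is refused, as here, the loop re-raises the final error via `raise err`, which is the `ConnectionRefusedError` in this traceback.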
True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. 
Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. 
Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 
281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 
281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 
281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | 
None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 
281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s 
/usr/lib/python3/dist-packages/requests/api.py:73: in get
281s     return request("get", url, params=params, **kwargs)
281s /usr/lib/python3/dist-packages/requests/api.py:59: in request
281s     return session.request(method=method, url=url, **kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
281s     resp = self.send(prep, **send_kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
281s     r = adapter.send(request, **kwargs)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s             resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s         except (ProtocolError, OSError) as err:
281s             raise ConnectionError(err, request=request)
281s 
281s         except MaxRetryError as e:
281s             if isinstance(e.reason, ConnectTimeoutError):
281s                 # TODO: Remove this in 3.0.0: see #2811
281s                 if not isinstance(e.reason, NewConnectionError):
281s                     raise ConnectTimeout(e, request=request)
281s 
281s             if isinstance(e.reason, ResponseError):
281s                 raise RetryError(e, request=request)
281s 
281s             if isinstance(e.reason, _ProxyError):
281s                 raise ProxyError(e, request=request)
281s 
281s             if isinstance(e.reason, _SSLError):
281s                 # This branch is for urllib3 v1.22 and later.
281s                 raise SSLError(e, request=request)
281s 
281s >           raise ConnectionError(e, request=request)
281s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def setup_class(cls):
281s         if not async_testing_enabled:  # Can be removed once jupyter_client >= 6.1 is required.
281s             raise SkipTest("AsyncSessionAPITest tests skipped due to down-level jupyter_client!")
281s >       super().setup_class()
281s 
281s notebook/services/sessions/tests/test_sessions_api.py:274: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s notebook/tests/launchnotebook.py:198: in setup_class
281s     cls.wait_until_alive()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E                   RuntimeError: The notebook server failed to start
281s 
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s _______ ERROR at setup of AsyncSessionAPITest.test_create_with_kernel_id _______
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s >           sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s     raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s 
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s 
281s     def create_connection(
281s         address: tuple[str, int],
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         source_address: tuple[str, int] | None = None,
281s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s     ) -> socket.socket:
281s         """Connect to *address* and return the socket object.
281s 
281s         Convenience function.  Connect to *address* (a 2-tuple ``(host,
281s         port)``) and return the socket object.  Passing the optional
281s         *timeout* parameter will set the timeout on the socket instance
281s         before attempting to connect.  If no *timeout* is supplied, the
281s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
281s         is used.  If *source_address* is set it must be a tuple of (host, port)
281s         for the socket to bind as a source address before making the connection.
281s         An host of '' or port 0 tells the OS to use the default.
281s         """
281s 
281s         host, port = address
281s         if host.startswith("["):
281s             host = host.strip("[]")
281s         err = None
281s 
281s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
281s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
281s         # The original create_connection function always returns all records.
281s         family = allowed_gai_family()
281s 
281s         try:
281s             host.encode("idna")
281s         except UnicodeError:
281s             raise LocationParseError(f"'{host}', label empty or too long") from None
281s 
281s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
281s             af, socktype, proto, canonname, sa = res
281s             sock = None
281s             try:
281s                 sock = socket.socket(af, socktype, proto)
281s 
281s                 # If provided, set socket level options before connecting.
281s                 _set_socket_options(sock, socket_options)
281s 
281s                 if timeout is not _DEFAULT_TIMEOUT:
281s                     sock.settimeout(timeout)
281s                 if source_address:
281s                     sock.bind(source_address)
281s >               sock.connect(sa)
281s E               ConnectionRefusedError: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s method = 'GET', url = '/a%40b/api/contents', body = None
281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s redirect = False, assert_same_host = False
281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
281s release_conn = False, chunked = False, body_pos = None, preload_content = False
281s decode_content = False, response_kw = {}
281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
281s destination_scheme = None, conn = None, release_this_conn = True
281s http_tunnel_required = False, err = None, clean_exit = False
281s 
281s     def urlopen(  # type: ignore[override]
281s         self,
281s         method: str,
281s         url: str,
281s         body: _TYPE_BODY | None = None,
281s         headers: typing.Mapping[str, str] | None = None,
281s         retries: Retry | bool | int | None = None,
281s         redirect: bool = True,
281s         assert_same_host: bool = True,
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         pool_timeout: int | None = None,
281s         release_conn: bool | None = None,
281s         chunked: bool = False,
281s         body_pos: _TYPE_BODY_POSITION | None = None,
281s         preload_content: bool = True,
281s         decode_content: bool = True,
281s         **response_kw: typing.Any,
281s     ) -> BaseHTTPResponse:
281s         """
281s         Get a connection from the pool and perform an HTTP request. This is the
281s         lowest level call for making a request, so you'll need to specify all
281s         the raw details.
281s 
281s         .. note::
281s 
281s            More commonly, it's appropriate to use a convenience method
281s            such as :meth:`request`.
281s 
281s         .. note::
281s 
281s            `release_conn` will only behave as expected if
281s            `preload_content=False` because we want to make
281s            `preload_content=False` the default behaviour someday soon without
281s            breaking backwards compatibility.
281s 
281s         :param method:
281s             HTTP request method (such as GET, POST, PUT, etc.)
281s 
281s         :param url:
281s             The URL to perform the request on.
281s 
281s         :param body:
281s             Data to send in the request body, either :class:`str`, :class:`bytes`,
281s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
281s 
281s         :param headers:
281s             Dictionary of custom headers to send, such as User-Agent,
281s             If-None-Match, etc. If None, pool headers are used. If provided,
281s             these headers completely replace any pool-specific headers.
281s 
281s         :param retries:
281s             Configure the number of retries to allow before raising a
281s             :class:`~urllib3.exceptions.MaxRetryError` exception.
281s 
281s             Pass ``None`` to retry until you receive a response. Pass a
281s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
281s             over different types of retries.
281s             Pass an integer number to retry connection errors that many times,
281s             but no other types of errors. Pass zero to never retry.
281s 
281s             If ``False``, then retries are disabled and any exception is raised
281s             immediately. Also, instead of raising a MaxRetryError on redirects,
281s             the redirect response will be returned.
281s 
281s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
281s 
281s         :param redirect:
281s             If True, automatically handle redirects (status codes 301, 302,
281s             303, 307, 308). Each redirect counts as a retry. Disabling retries
281s             will disable redirect, too.
281s 
281s         :param assert_same_host:
281s             If ``True``, will make sure that the host of the pool requests is
281s             consistent else will raise HostChangedError. When ``False``, you can
281s             use the pool on an HTTP proxy and request foreign hosts.
281s 
281s         :param timeout:
281s             If specified, overrides the default timeout for this one
281s             request. It may be a float (in seconds) or an instance of
281s             :class:`urllib3.util.Timeout`.
281s 
281s         :param pool_timeout:
281s             If set and the pool is set to block=True, then this method will
281s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
281s             connection is available within the time period.
281s 
281s         :param bool preload_content:
281s             If True, the response's body will be preloaded into memory.
281s 
281s         :param bool decode_content:
281s             If True, will attempt to decode the body based on the
281s             'content-encoding' header.
281s 
281s         :param release_conn:
281s             If False, then the urlopen call will not release the connection
281s             back into the pool once a response is received (but will release if
281s             you read the entire contents of the response such as when
281s             `preload_content=True`). This is useful if you're not preloading
281s             the response's content immediately. You will need to call
281s             ``r.release_conn()`` on the response ``r`` to return the connection
281s             back into the pool. If None, it takes the value of ``preload_content``
281s             which defaults to ``True``.
281s 
281s         :param bool chunked:
281s             If True, urllib3 will send the body using chunked transfer
281s             encoding. Otherwise, urllib3 will send the body using the standard
281s             content-length form. Defaults to False.
281s 
281s         :param int body_pos:
281s             Position to seek to in file-like body in the event of a retry or
281s             redirect. Typically this won't need to be set because urllib3 will
281s             auto-populate the value when needed.
281s         """
281s         parsed_url = parse_url(url)
281s         destination_scheme = parsed_url.scheme
281s 
281s         if headers is None:
281s             headers = self.headers
281s 
281s         if not isinstance(retries, Retry):
281s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
281s 
281s         if release_conn is None:
281s             release_conn = preload_content
281s 
281s         # Check host
281s         if assert_same_host and not self.is_same_host(url):
281s             raise HostChangedError(self, url, retries)
281s 
281s         # Ensure that the URL we're connecting to is properly encoded
281s         if url.startswith("/"):
281s             url = to_str(_encode_target(url))
281s         else:
281s             url = to_str(parsed_url.url)
281s 
281s         conn = None
281s 
281s         # Track whether `conn` needs to be released before
281s         # returning/raising/recursing. Update this variable if necessary, and
281s         # leave `release_conn` constant throughout the function. That way, if
281s         # the function recurses, the original value of `release_conn` will be
281s         # passed down into the recursive call, and its value will be respected.
281s         #
281s         # See issue #651 [1] for details.
281s         #
281s         # [1] 
281s         release_this_conn = release_conn
281s 
281s         http_tunnel_required = connection_requires_http_tunnel(
281s             self.proxy, self.proxy_config, destination_scheme
281s         )
281s 
281s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
281s         # have to copy the headers dict so we can safely change it without those
281s         # changes being reflected in anyone else's copy.
281s         if not http_tunnel_required:
281s             headers = headers.copy()  # type: ignore[attr-defined]
281s             headers.update(self.proxy_headers)  # type: ignore[union-attr]
281s 
281s         # Must keep the exception bound to a separate variable or else Python 3
281s         # complains about UnboundLocalError.
281s         err = None
281s 
281s         # Keep track of whether we cleanly exited the except block. This
281s         # ensures we do proper cleanup in finally.
281s         clean_exit = False
281s 
281s         # Rewind body position, if needed. Record current position
281s         # for future rewinds in the event of a redirect/retry.
281s         body_pos = set_file_position(body, body_pos)
281s 
281s         try:
281s             # Request a connection from the queue.
281s             timeout_obj = self._get_timeout(timeout)
281s             conn = self._get_conn(timeout=pool_timeout)
281s 
281s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
281s 
281s             # Is this a closed/new connection that requires CONNECT tunnelling?
281s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
281s                 try:
281s                     self._prepare_proxy(conn)
281s                 except (BaseSSLError, OSError, SocketTimeout) as e:
281s                     self._raise_timeout(
281s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
281s                     )
281s                     raise
281s 
281s             # If we're going to release the connection in ``finally:``, then
281s             # the response doesn't need to know about the connection. Otherwise
281s             # it will also try to release it and we'll have a double-release
281s             # mess.
281s             response_conn = conn if not release_conn else None
281s 
281s             # Make the request on the HTTPConnection object
281s >           response = self._make_request(
281s                 conn,
281s                 method,
281s                 url,
281s                 timeout=timeout_obj,
281s                 body=body,
281s                 headers=headers,
281s                 chunked=chunked,
281s                 retries=retries,
281s                 response_conn=response_conn,
281s                 preload_content=preload_content,
281s                 decode_content=decode_content,
281s                 **response_kw,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
281s     conn.request(
281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
281s     self.endheaders()
281s /usr/lib/python3.12/http/client.py:1331: in endheaders
281s     self._send_output(message_body, encode_chunked=encode_chunked)
281s /usr/lib/python3.12/http/client.py:1091: in _send_output
281s     self.send(msg)
281s /usr/lib/python3.12/http/client.py:1035: in send
281s     self.connect()
281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
281s     self.sock = self._new_conn()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s             sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s         except socket.gaierror as e:
281s             raise NameResolutionError(self.host, self, e) from e
281s         except SocketTimeout as e:
281s             raise ConnectTimeoutError(
281s                 self,
281s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
281s             ) from e
281s 
281s         except OSError as e:
281s >           raise NewConnectionError(
281s                 self, f"Failed to establish a new connection: {e}"
281s             ) from e
281s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s >           resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
281s     retries = retries.increment(
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s 
281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s method = 'GET', url = '/a%40b/api/contents', response = None
281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
281s _pool = 
281s _stacktrace = 
281s 
281s     def increment(
281s         self,
281s         method: str | None = None,
281s         url: str | None = None,
281s         response: BaseHTTPResponse | None = None,
281s         error: Exception | None = None,
281s         _pool: ConnectionPool | None = None,
281s         _stacktrace: TracebackType | None = None,
281s     ) -> Retry:
281s         """Return a new Retry object with incremented retry counters.
281s 
281s         :param response: A response object, or None, if the server did not
281s             return a response.
281s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
281s         :param Exception error: An error encountered during the request, or
281s             None if the response was received successfully.
281s 
281s         :return: A new ``Retry`` object.
281s         """
281s         if self.total is False and error:
281s             # Disabled, indicate to re-raise the error.
281s             raise reraise(type(error), error, _stacktrace)
281s 
281s         total = self.total
281s         if total is not None:
281s             total -= 1
281s 
281s         connect = self.connect
281s         read = self.read
281s         redirect = self.redirect
281s         status_count = self.status
281s         other = self.other
281s         cause = "unknown"
281s         status = None
281s         redirect_location = None
281s 
281s         if error and self._is_connection_error(error):
281s             # Connect retry?
281s             if connect is False:
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif connect is not None:
281s                 connect -= 1
281s 
281s         elif error and self._is_read_error(error):
281s             # Read retry?
281s             if read is False or method is None or not self._is_method_retryable(method):
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif read is not None:
281s                 read -= 1
281s 
281s         elif error:
281s             # Other retry?
281s             if other is not None:
281s                 other -= 1
281s 
281s         elif response and response.get_redirect_location():
281s             # Redirect retry?
281s             if redirect is not None:
281s                 redirect -= 1
281s             cause = "too many redirects"
281s             response_redirect_location = response.get_redirect_location()
281s             if response_redirect_location:
281s                 redirect_location = response_redirect_location
281s             status = response.status
281s 
281s         else:
281s             # Incrementing because of a server error like a 500 in
281s             # status_forcelist and the given method is in the allowed_methods
281s             cause = ResponseError.GENERIC_ERROR
281s             if response and response.status:
281s                 if status_count is not None:
281s                     status_count -= 1
281s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
281s                 status = response.status
281s 
281s         history = self.history + (
281s             RequestHistory(method, url, error, status, redirect_location),
281s         )
281s 
281s         new_retry = self.new(
281s             total=total,
281s             connect=connect,
281s             read=read,
281s             redirect=redirect,
281s             status=status_count,
281s             other=other,
281s             history=history,
281s         )
281s 
281s         if new_retry.is_exhausted():
281s             reason = error or ResponseError(cause)
281s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
281s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s >               cls.fetch_url(url)
281s 
281s notebook/tests/launchnotebook.py:53: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s notebook/tests/launchnotebook.py:82: in fetch_url
281s     return requests.get(url)
281s 
/usr/lib/python3/dist-packages/requests/api.py:73: in get
281s     return request("get", url, params=params, **kwargs)
281s /usr/lib/python3/dist-packages/requests/api.py:59: in request
281s     return session.request(method=method, url=url, **kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
281s     resp = self.send(prep, **send_kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
281s     r = adapter.send(request, **kwargs)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s             resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s         except (ProtocolError, OSError) as err:
281s             raise ConnectionError(err, request=request)
281s 
281s         except MaxRetryError as e:
281s             if isinstance(e.reason, ConnectTimeoutError):
281s                 # TODO: Remove this in 3.0.0: see #2811
281s                 if not isinstance(e.reason, NewConnectionError):
281s                     raise ConnectTimeout(e, request=request)
281s 
281s             if isinstance(e.reason, ResponseError):
281s                 raise RetryError(e, request=request)
281s 
281s             if isinstance(e.reason, _ProxyError):
281s                 raise ProxyError(e, request=request)
281s 
281s             if isinstance(e.reason, _SSLError):
281s                 # This branch is for urllib3 v1.22 and later.
281s                 raise SSLError(e, request=request)
281s 
281s >           raise ConnectionError(e, request=request)
281s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def setup_class(cls):
281s         if not async_testing_enabled:  # Can be removed once jupyter_client >= 6.1 is required.
281s             raise SkipTest("AsyncSessionAPITest tests skipped due to down-level jupyter_client!")
281s >       super().setup_class()
281s 
281s notebook/services/sessions/tests/test_sessions_api.py:274: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s notebook/tests/launchnotebook.py:198: in setup_class
281s     cls.wait_until_alive()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E                   RuntimeError: The notebook server failed to start
281s 
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s ______________ ERROR at setup of AsyncSessionAPITest.test_delete _______________
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s >           sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s     raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s 
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s 
281s     def create_connection(
281s         address: tuple[str, int],
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         source_address: tuple[str, int] | None = None,
281s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s     ) -> socket.socket:
281s         """Connect to *address* and return the socket object.
281s 
281s         Convenience function.  Connect to *address* (a 2-tuple ``(host,
281s         port)``) and return the socket object.  Passing the optional
281s         *timeout* parameter will set the timeout on the socket instance
281s         before attempting to connect.  If no *timeout* is supplied, the
281s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
281s         is used.  If *source_address* is set it must be a tuple of (host, port)
281s         for the socket to bind as a source address before making the connection.
281s         An host of '' or port 0 tells the OS to use the default.
281s         """
281s 
281s         host, port = address
281s         if host.startswith("["):
281s             host = host.strip("[]")
281s         err = None
281s 
281s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
281s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
281s         # The original create_connection function always returns all records.
281s         family = allowed_gai_family()
281s 
281s         try:
281s             host.encode("idna")
281s         except UnicodeError:
281s             raise LocationParseError(f"'{host}', label empty or too long") from None
281s 
281s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
281s             af, socktype, proto, canonname, sa = res
281s             sock = None
281s             try:
281s                 sock = socket.socket(af, socktype, proto)
281s 
281s                 # If provided, set socket level options before connecting.
281s                 _set_socket_options(sock, socket_options)
281s 
281s                 if timeout is not _DEFAULT_TIMEOUT:
281s                     sock.settimeout(timeout)
281s                 if source_address:
281s                     sock.bind(source_address)
281s >               sock.connect(sa)
281s E               ConnectionRefusedError: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s method = 'GET', url = '/a%40b/api/contents', body = None
281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s redirect = False, assert_same_host = False
281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
281s release_conn = False, chunked = False, body_pos = None, preload_content = False
281s decode_content = False, response_kw = {}
281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
281s destination_scheme = None, conn = None, release_this_conn = True
281s http_tunnel_required = False, err = None, clean_exit = False
281s 
281s     def urlopen(  # type: ignore[override]
281s         self,
281s         method: str,
281s         url: str,
281s         body: _TYPE_BODY | None = None,
281s         headers: typing.Mapping[str, str] | None = None,
281s         retries: Retry | bool | int | None = None,
281s         redirect: bool = True,
281s         assert_same_host: bool = True,
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         pool_timeout: int | None = None,
281s         release_conn: bool | None = None,
281s         chunked: bool = False,
281s         body_pos: _TYPE_BODY_POSITION | None = None,
281s         preload_content: bool = True,
281s         decode_content: bool = True,
281s         **response_kw: typing.Any,
281s     ) -> BaseHTTPResponse:
281s         """
281s         Get a connection from the pool and perform an HTTP request. This is the
281s         lowest level call for making a request, so you'll need to specify all
281s         the raw details.
281s 
281s         .. note::
281s 
281s            More commonly, it's appropriate to use a convenience method
281s            such as :meth:`request`.
281s 
281s         .. note::
281s 
281s            `release_conn` will only behave as expected if
281s            `preload_content=False` because we want to make
281s            `preload_content=False` the default behaviour someday soon without
281s            breaking backwards compatibility.
281s 
281s         :param method:
281s             HTTP request method (such as GET, POST, PUT, etc.)
281s 
281s         :param url:
281s             The URL to perform the request on.
281s 
281s         :param body:
281s             Data to send in the request body, either :class:`str`, :class:`bytes`,
281s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
281s 
281s         :param headers:
281s             Dictionary of custom headers to send, such as User-Agent,
281s             If-None-Match, etc. If None, pool headers are used. If provided,
281s             these headers completely replace any pool-specific headers.
281s 
281s         :param retries:
281s             Configure the number of retries to allow before raising a
281s             :class:`~urllib3.exceptions.MaxRetryError` exception.
281s 
281s             Pass ``None`` to retry until you receive a response. Pass a
281s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
281s             over different types of retries.
281s             Pass an integer number to retry connection errors that many times,
281s             but no other types of errors. Pass zero to never retry.
281s 
281s             If ``False``, then retries are disabled and any exception is raised
281s             immediately. 
Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. 
Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 
281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 
281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 
281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | 
None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 
281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s 
/usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 
281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s if not async_testing_enabled: # Can be removed once jupyter_client >= 6.1 is required. 281s raise SkipTest("AsyncSessionAPITest tests skipped due to down-level jupyter_client!") 281s > super().setup_class() 281s 281s notebook/services/sessions/tests/test_sessions_api.py:274: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:198: in setup_class 281s cls.wait_until_alive() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s _________ ERROR at setup of AsyncSessionAPITest.test_modify_kernel_id __________
(connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 
281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | 
None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 
281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s 
281s /usr/lib/python3/dist-packages/requests/api.py:73: in get
281s return request("get", url, params=params, **kwargs)
281s /usr/lib/python3/dist-packages/requests/api.py:59: in request
281s return session.request(method=method, url=url, **kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
281s resp = self.send(prep, **send_kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
281s r = adapter.send(request, **kwargs)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s self =
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s
281s def send(
281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s ):
281s """Sends PreparedRequest object. Returns Response object.
281s
281s :param request: The :class:`PreparedRequest ` being sent.
281s :param stream: (optional) Whether to stream the request content.
281s :param timeout: (optional) How long to wait for the server to send
281s data before giving up, as a float, or a :ref:`(connect timeout,
281s read timeout) ` tuple.
281s :type timeout: float or tuple or urllib3 Timeout object
281s :param verify: (optional) Either a boolean, in which case it controls whether
281s we verify the server's TLS certificate, or a string, in which case it
281s must be a path to a CA bundle to use
281s :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s :param proxies: (optional) The proxies dictionary to apply to the request.
281s :rtype: requests.Response
281s """
281s
281s try:
281s conn = self.get_connection(request.url, proxies)
281s except LocationValueError as e:
281s raise InvalidURL(e, request=request)
281s
281s self.cert_verify(conn, request.url, verify, cert)
281s url = self.request_url(request, proxies)
281s self.add_headers(
281s request,
281s stream=stream,
281s timeout=timeout,
281s verify=verify,
281s cert=cert,
281s proxies=proxies,
281s )
281s
281s chunked = not (request.body is None or "Content-Length" in request.headers)
281s
281s if isinstance(timeout, tuple):
281s try:
281s connect, read = timeout
281s timeout = TimeoutSauce(connect=connect, read=read)
281s except ValueError:
281s raise ValueError(
281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s f"or a single float to set both timeouts to the same value."
281s )
281s elif isinstance(timeout, TimeoutSauce):
281s pass
281s else:
281s timeout = TimeoutSauce(connect=timeout, read=timeout)
281s
281s try:
281s resp = conn.urlopen(
281s method=request.method,
281s url=url,
281s body=request.body,
281s headers=request.headers,
281s redirect=False,
281s assert_same_host=False,
281s preload_content=False,
281s decode_content=False,
281s retries=self.max_retries,
281s timeout=timeout,
281s chunked=chunked,
281s )
281s
281s except (ProtocolError, OSError) as err:
281s raise ConnectionError(err, request=request)
281s
281s except MaxRetryError as e:
281s if isinstance(e.reason, ConnectTimeoutError):
281s # TODO: Remove this in 3.0.0: see #2811
281s if not isinstance(e.reason, NewConnectionError):
281s raise ConnectTimeout(e, request=request)
281s
281s if isinstance(e.reason, ResponseError):
281s raise RetryError(e, request=request)
281s
281s if isinstance(e.reason, _ProxyError):
281s raise ProxyError(e, request=request)
281s
281s if isinstance(e.reason, _SSLError):
281s # This branch is for urllib3 v1.22 and later.
281s raise SSLError(e, request=request)
281s
281s >       raise ConnectionError(e, request=request)
281s E       requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s
281s The above exception was the direct cause of the following exception:
281s
281s cls =
281s
281s @classmethod
281s def setup_class(cls):
281s if not async_testing_enabled:  # Can be removed once jupyter_client >= 6.1 is required.
281s raise SkipTest("AsyncSessionAPITest tests skipped due to down-level jupyter_client!")
281s >       super().setup_class()
281s
281s notebook/services/sessions/tests/test_sessions_api.py:274:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s notebook/tests/launchnotebook.py:198: in setup_class
281s cls.wait_until_alive()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s cls =
281s
281s @classmethod
281s def wait_until_alive(cls):
281s """Wait for the server to be alive"""
281s url = cls.base_url() + 'api/contents'
281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s try:
281s cls.fetch_url(url)
281s except ModuleNotFoundError as error:
281s # Errors that should be immediately thrown back to caller
281s raise error
281s except Exception as e:
281s if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E                   RuntimeError: The notebook server failed to start
281s
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s ________ ERROR at setup of AsyncSessionAPITest.test_modify_kernel_name _________
281s
281s self =
281s
281s def _new_conn(self) -> socket.socket:
281s """Establish a socket connection and set nodelay settings on it.
281s
281s :return: New socket connection.
281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 
281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = 
True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. 
Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. 
Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 
281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 
281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 
281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | 
None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 
281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s 
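The setup failures in this log all surface through `wait_until_alive`, which polls the contents API until the notebook server answers or the server thread dies. A minimal standard-library sketch of that polling pattern; the `MAX_WAITTIME`, `POLL_INTERVAL`, and `probe()` names are illustrative stand-ins, not the actual notebook test code (which issues HTTP GETs via `fetch_url`):

```python
# Sketch of the wait_until_alive polling loop seen in the traceback above.
# Assumption: a bare TCP probe stands in for the real HTTP fetch_url() call.
import socket
import time

MAX_WAITTIME = 2.0   # illustrative: seconds to keep polling before giving up
POLL_INTERVAL = 0.1  # illustrative: pause between probes

def probe(host, port, timeout=0.25):
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # ConnectionRefusedError, timeouts, DNS failures, ...
        return False

def wait_until_alive(host, port):
    """Poll until the server accepts connections, else raise RuntimeError."""
    for _ in range(int(MAX_WAITTIME / POLL_INTERVAL)):
        if probe(host, port):
            return True
        time.sleep(POLL_INTERVAL)
    # Mirrors the "The notebook server failed to start" failure in the log.
    raise RuntimeError("The server failed to start")
```

In the log, every probe hits `[Errno 111] Connection refused` because nothing is listening on port 12341, so the loop exhausts and the `RuntimeError` is chained from the last `ConnectionError`.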
/usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 
281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s if not async_testing_enabled: # Can be removed once jupyter_client >= 6.1 is required. 281s raise SkipTest("AsyncSessionAPITest tests skipped due to down-level jupyter_client!") 281s > super().setup_class() 281s 281s notebook/services/sessions/tests/test_sessions_api.py:274: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:198: in setup_class 281s cls.wait_until_alive() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s ____________ ERROR at setup of AsyncSessionAPITest.test_modify_path ____________ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 
281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 
281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = 
True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. 
Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. 
Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 
281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 
281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 
281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | 
None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 
281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s 
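The `HTTPAdapter.send` code quoted in this traceback normalizes its `timeout` argument before handing it to urllib3: a `(connect, read)` tuple is split into two values, a single number (or `None`) sets both, and a malformed tuple raises `ValueError`. A plain-Python sketch of that normalization; `normalize_timeout` is an illustrative helper, not part of the requests API:

```python
# Sketch of the timeout normalization performed in HTTPAdapter.send above.
# Assumption: returning a (connect, read) pair stands in for building
# requests' internal TimeoutSauce object.

def normalize_timeout(timeout):
    """Return a (connect, read) pair from requests' accepted timeout forms."""
    if isinstance(timeout, tuple):
        try:
            connect, read = timeout
        except ValueError:
            # Same failure mode as the adapter: wrong-length tuples rejected.
            raise ValueError(
                f"Invalid timeout {timeout}. Pass a (connect, read) timeout "
                f"tuple, or a single float to set both timeouts to the same value."
            )
        return connect, read
    # A single number (or None, as in this log) sets both timeouts.
    return timeout, timeout
```

The log shows the `None` case: `Timeout(connect=None, read=None, total=None)`, i.e. the test's `requests.get(url)` passed no timeout at all, so only the OS-level connect failure ends the attempt.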
/usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 
281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s if not async_testing_enabled: # Can be removed once jupyter_client >= 6.1 is required. 281s raise SkipTest("AsyncSessionAPITest tests skipped due to down-level jupyter_client!") 281s > super().setup_class() 281s 281s notebook/services/sessions/tests/test_sessions_api.py:274: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:198: in setup_class 281s cls.wait_until_alive() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s ______ ERROR at setup of AsyncSessionAPITest.test_modify_path_deprecated _______ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 
281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 
281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = 
True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. 
Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. 
Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 
281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 
281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 
281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | 
None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 
281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s 
/usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 
281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s if not async_testing_enabled: # Can be removed once jupyter_client >= 6.1 is required. 281s raise SkipTest("AsyncSessionAPITest tests skipped due to down-level jupyter_client!") 281s > super().setup_class() 281s 281s notebook/services/sessions/tests/test_sessions_api.py:274: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:198: in setup_class 281s cls.wait_until_alive() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s ____________ ERROR at setup of AsyncSessionAPITest.test_modify_type ____________ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 
281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 
281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = 
True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. 
Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. 
Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 
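The comment ending above describes copying the pool's headers dict before merging in proxy headers, so the merge is never reflected in the caller's copy, and skipping the merge entirely when an HTTP CONNECT tunnel is used. A minimal stdlib sketch of that merge pattern (function and parameter names are illustrative, not urllib3's actual API):

```python
def merge_proxy_headers(request_headers, proxy_headers, http_tunnel_required):
    """Return headers with proxy headers merged in, without mutating the input.

    The merge is skipped when an HTTP CONNECT tunnel will carry the
    request, since the proxy never sees the inner request headers then.
    """
    if http_tunnel_required:
        return request_headers
    merged = dict(request_headers)  # copy, so the caller's dict is untouched
    merged.update(proxy_headers)
    return merged

original = {"Accept": "*/*"}
merged = merge_proxy_headers(original, {"Proxy-Authorization": "Basic xyz"}, False)
# `merged` gains the proxy header; `original` is unchanged.
```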
281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 
281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 
281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | 
None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 
281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s 
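The `Retry.increment` frame above shows why this run fails on the first attempt: the pool was configured with `Retry(total=0, connect=None, ...)`, so a single `ConnectionRefusedError` exhausts the budget and `MaxRetryError` is raised. A simplified, stdlib-only model of that counting logic for connection errors (not urllib3's actual class):

```python
class MaxRetryError(Exception):
    pass

def increment(total, connect, error=None):
    """Simplified model of urllib3's Retry.increment for connection errors.

    ``total``/``connect`` may be None (no separate limit) or an int budget;
    ``connect`` may also be False, meaning "never retry connection errors".
    """
    if connect is False and error is not None:
        raise error  # connect retries disabled: re-raise immediately
    if total is not None:
        total -= 1
    if connect is not None:
        connect -= 1
    if total is not None and total < 0:
        # Budget exhausted, as with Retry(total=0) in the log above.
        raise MaxRetryError("Max retries exceeded") from error
    return total, connect
```

With `total=0`, the first error drives the counter below zero, matching the `MaxRetryError: ... Max retries exceeded with url: /a%40b/api/contents` seen in the traceback.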
/usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 
281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s if not async_testing_enabled: # Can be removed once jupyter_client >= 6.1 is required. 281s raise SkipTest("AsyncSessionAPITest tests skipped due to down-level jupyter_client!") 281s > super().setup_class() 281s 281s notebook/services/sessions/tests/test_sessions_api.py:274: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:198: in setup_class 281s cls.wait_until_alive() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s ____________ ERROR at setup of TerminalAPITest.test_create_terminal ____________ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 
281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 
281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = 
True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. 
Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. 
Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 
281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 
281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 
281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | 
None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 
281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s 
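The `adapters.py` frame above shows requests accepting the user `timeout` as a float, a `(connect, read)` tuple, or an existing timeout object, and normalizing it before calling `conn.urlopen`. A stdlib-only sketch of that normalization (`normalize_timeout` is a hypothetical name; requests actually builds a urllib3 `Timeout`, aliased `TimeoutSauce`):

```python
def normalize_timeout(timeout):
    """Return a (connect, read) pair from a float, a 2-tuple, or None."""
    if isinstance(timeout, tuple):
        try:
            connect, read = timeout
        except ValueError:
            # Wrong arity, e.g. a 3-tuple: mirror requests' error message.
            raise ValueError(
                f"Invalid timeout {timeout}. Pass a (connect, read) timeout "
                f"tuple, or a single float to set both timeouts to the same value."
            )
        return (connect, read)
    # A single number (or None) applies to both the connect and read phases.
    return (timeout, timeout)

assert normalize_timeout(5.0) == (5.0, 5.0)
assert normalize_timeout((1.0, 10.0)) == (1.0, 10.0)
```

In this log the test used `requests.get(url)` with no timeout at all, which is why the context lines show `Timeout(connect=None, read=None, total=None)`.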
/usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 
281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
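`setup_class` above isolates the test run by patching `os.environ` (via `patch.dict`) and the `jupyter_core.paths` constants (via `patch.multiple`). A minimal sketch of the same mechanism, using a hypothetical variable name and path:

```python
import os
from unittest.mock import patch

# patch.dict overlays entries onto os.environ and restores the original
# mapping on stop() -- the same isolation setup_class relies on.
patcher = patch.dict(os.environ, {"JUPYTER_CONFIG_DIR": "/tmp/isolated-config"})
patcher.start()
assert os.environ["JUPYTER_CONFIG_DIR"] == "/tmp/isolated-config"
patcher.stop()
assert os.environ.get("JUPYTER_CONFIG_DIR") != "/tmp/isolated-config"
```

As the comment in the traceback notes, this patching deliberately leaves Python environment locations alone so data files (e.g. for nbconvert) stay reachable.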
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
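`wait_until_alive` (quoted next) is a bounded polling loop: attempt `fetch_url`, swallow transient failures, and give up after `MAX_WAITTIME / POLL_INTERVAL` attempts. A generic, stdlib-only sketch of that pattern (names are illustrative; the real method additionally re-raises `ModuleNotFoundError` immediately and checks whether the server thread is still alive):

```python
import time

def wait_until(predicate, max_waittime=1.0, poll_interval=0.05):
    """Poll predicate() until it returns truthy or the time budget is spent."""
    for _ in range(int(max_waittime / poll_interval)):
        try:
            if predicate():
                return True
        except Exception:
            pass  # e.g. connection refused: server not up yet, retry
        time.sleep(poll_interval)
    return False

# Succeeds on the third poll.
attempts = []
assert wait_until(lambda: attempts.append(1) or len(attempts) >= 3) is True
```

In the failure below, every poll raised `ConnectionError`, and the liveness check on the server thread then converted the timeout into `RuntimeError("The notebook server failed to start")`.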
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s ________ ERROR at setup of TerminalAPITest.test_create_terminal_via_get ________ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
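The `Retry(total=0, connect=None, read=False, ...)` object in the frames above is requests' default `max_retries`: one attempt, zero retries, so the first refused connection exhausts it and `MaxRetryError` is raised. A minimal sketch of that exhaustion using urllib3's public `Retry` API (assumes urllib3 is installed, which it is wherever requests is; `new()` and `is_exhausted()` behave this way in both urllib3 1.x and 2.x):

```python
# Sketch of the exhaustion check seen in Retry.increment above.
from urllib3.util.retry import Retry

retry = Retry(total=0, connect=None, read=False, redirect=None, status=None)
print(retry.is_exhausted())   # False: total is 0, no counter is negative yet

# increment() effectively does `total -= 1` and rebuilds the object via new();
# once a counter goes below zero, is_exhausted() flips and MaxRetryError fires.
bumped = retry.new(total=retry.total - 1)
print(bumped.is_exhausted())  # True: total is now -1
```

This is why the harness sees the failure immediately: with `total=0` there is no second attempt against the not-yet-listening notebook server.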
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
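The `isinstance(timeout, tuple)` branch shown above normalizes the user-supplied `timeout` into a urllib3 `Timeout`; `TimeoutSauce` is simply requests' import alias for `urllib3.util.Timeout`. A standalone sketch of the same conversion (a float applies to both phases, a 2-tuple sets connect and read separately):

```python
# Mirrors the normalization in HTTPAdapter.send above.
from urllib3.util import Timeout as TimeoutSauce  # requests' own alias

def normalize_timeout(timeout):
    if isinstance(timeout, tuple):
        connect, read = timeout  # a wrong-shaped tuple raises ValueError here
        return TimeoutSauce(connect=connect, read=read)
    if isinstance(timeout, TimeoutSauce):
        return timeout
    # A bare float (or None, as in this log) covers both connect and read.
    return TimeoutSauce(connect=timeout, read=timeout)

print(normalize_timeout((3.05, 27)).connect_timeout)  # 3.05
print(normalize_timeout(5).read_timeout)              # 5
```

In the failing test the value is `Timeout(connect=None, read=None, total=None)`, i.e. no timeout at all: the probe request fails fast only because the connection is refused outright.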
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
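The final `raise ConnectionError(e, request=request)` above is requests' catch-all translation of urllib3's `MaxRetryError`. The full chain in this log (`ConnectionRefusedError` → `NewConnectionError` → `MaxRetryError` → `requests.exceptions.ConnectionError`) can be reproduced by pointing requests at a local port nothing is listening on (a sketch; there is a small race if another process grabs the port between `close()` and the request):

```python
import socket
import requests

# Ask the OS for a free port, then close it so nothing listens there.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
port = probe.getsockname()[1]
probe.close()

try:
    requests.get(f"http://127.0.0.1:{port}/", timeout=2)
except requests.exceptions.ConnectionError as exc:
    # ECONNREFUSED surfaces exactly as in the log above: a MaxRetryError
    # wrapping NewConnectionError, re-raised as requests' ConnectionError.
    print(type(exc).__name__)  # ConnectionError
```

This is the same situation the test harness hits: port 12341 was chosen for the notebook server, but the server thread died before binding it.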
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s _______ ERROR at setup of TerminalAPITest.test_create_terminal_with_name _______ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
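`wait_until_alive` above polls `api/contents` every `POLL_INTERVAL` seconds for up to `MAX_WAITTIME`, and once the server thread is dead it chains the last connection failure into `RuntimeError: The notebook server failed to start` via `raise ... from e`. A generic version of that poll-until-alive pattern (names and defaults are mine, not the harness's):

```python
import time

def wait_until_alive(probe, max_wait=30.0, interval=0.1):
    """Call `probe` until it succeeds or `max_wait` seconds elapse.

    Chains the last failure into the final RuntimeError, like the
    `raise ... from e` in launchnotebook.py above.
    """
    deadline = time.monotonic() + max_wait
    last_exc = None
    while time.monotonic() < deadline:
        try:
            return probe()
        except Exception as exc:  # poll loops deliberately swallow broadly
            last_exc = exc
            time.sleep(interval)
    raise RuntimeError("The server failed to start") from last_exc

# Example: a probe that only succeeds on the third attempt.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("not up yet")
    return "alive"

print(wait_until_alive(flaky, max_wait=5.0, interval=0.01))  # alive
```

The `from e` chaining is what produces the "The above exception was the direct cause of the following exception" sections that structure this traceback.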
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
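urllib3's `create_connection` shown above is a thin wrapper over the standard getaddrinfo loop: try each resolved address in turn, remember the last `OSError`, and re-raise it (here as `ConnectionRefusedError`) if nothing connects. A pure-stdlib sketch of that loop, omitting urllib3's IDNA and socket-option handling:

```python
import socket

def create_connection(address, timeout=None):
    """Stdlib-only sketch of the loop in urllib3.util.connection above."""
    host, port = address
    err = None
    for af, socktype, proto, _canon, sa in socket.getaddrinfo(
        host, port, socket.AF_UNSPEC, socket.SOCK_STREAM
    ):
        sock = None
        try:
            sock = socket.socket(af, socktype, proto)
            if timeout is not None:
                sock.settimeout(timeout)
            sock.connect(sa)  # ECONNREFUSED lands here, as in the log above
            return sock
        except OSError as exc:
            err = exc
            if sock is not None:
                sock.close()
    # Mirror `raise err` in the traceback: surface the last failure.
    if err is not None:
        raise err
    raise OSError("getaddrinfo returned no results")
```

Since nothing is listening on `localhost:12341`, every resolved address fails with errno 111 and the last error propagates up as the `ConnectionRefusedError` seen above.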
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s _____________ ERROR at setup of TerminalAPITest.test_no_terminals ______________ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s ___________ ERROR at setup of TerminalAPITest.test_terminal_handler ____________ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s             _set_socket_options(sock, socket_options)
281s 
281s             if timeout is not _DEFAULT_TIMEOUT:
281s                 sock.settimeout(timeout)
281s             if source_address:
281s                 sock.bind(source_address)
281s >           sock.connect(sa)
281s E           ConnectionRefusedError: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s method = 'GET', url = '/a%40b/api/contents', body = None
281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s redirect = False, assert_same_host = False
281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
281s release_conn = False, chunked = False, body_pos = None, preload_content = False
281s decode_content = False, response_kw = {}
281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
281s destination_scheme = None, conn = None, release_this_conn = True
281s http_tunnel_required = False, err = None, clean_exit = False
281s 
281s     def urlopen( # type: ignore[override]
281s         self,
281s         method: str,
281s         url: str,
281s         body: _TYPE_BODY | None = None,
281s         headers: typing.Mapping[str, str] | None = None,
281s         retries: Retry | bool | int | None = None,
281s         redirect: bool = True,
281s         assert_same_host: bool = True,
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         pool_timeout: int | None = None,
281s         release_conn: bool | None = None,
281s         chunked: bool = False,
281s         body_pos: _TYPE_BODY_POSITION | None = None,
281s         preload_content: bool = True,
281s         decode_content: bool = True,
281s         **response_kw: typing.Any,
281s     ) -> BaseHTTPResponse:
281s         """
281s         Get a connection from the pool and perform an HTTP request. This is the
281s         lowest level call for making a request, so you'll need to specify all
281s         the raw details.
281s 
281s         .. note::
281s 
281s            More commonly, it's appropriate to use a convenience method
281s            such as :meth:`request`.
281s 
281s         .. note::
281s 
281s            `release_conn` will only behave as expected if
281s            `preload_content=False` because we want to make
281s            `preload_content=False` the default behaviour someday soon without
281s            breaking backwards compatibility.
281s 
281s         :param method:
281s             HTTP request method (such as GET, POST, PUT, etc.)
281s 
281s         :param url:
281s             The URL to perform the request on.
281s 
281s         :param body:
281s             Data to send in the request body, either :class:`str`, :class:`bytes`,
281s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
281s 
281s         :param headers:
281s             Dictionary of custom headers to send, such as User-Agent,
281s             If-None-Match, etc. If None, pool headers are used. If provided,
281s             these headers completely replace any pool-specific headers.
281s 
281s         :param retries:
281s             Configure the number of retries to allow before raising a
281s             :class:`~urllib3.exceptions.MaxRetryError` exception.
281s 
281s             Pass ``None`` to retry until you receive a response. Pass a
281s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
281s             over different types of retries.
281s             Pass an integer number to retry connection errors that many times,
281s             but no other types of errors. Pass zero to never retry.
281s 
281s             If ``False``, then retries are disabled and any exception is raised
281s             immediately. Also, instead of raising a MaxRetryError on redirects,
281s             the redirect response will be returned.
281s 
281s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
281s 
281s         :param redirect:
281s             If True, automatically handle redirects (status codes 301, 302,
281s             303, 307, 308). Each redirect counts as a retry. Disabling retries
281s             will disable redirect, too.
281s 
281s         :param assert_same_host:
281s             If ``True``, will make sure that the host of the pool requests is
281s             consistent else will raise HostChangedError. When ``False``, you can
281s             use the pool on an HTTP proxy and request foreign hosts.
281s 
281s         :param timeout:
281s             If specified, overrides the default timeout for this one
281s             request. It may be a float (in seconds) or an instance of
281s             :class:`urllib3.util.Timeout`.
281s 
281s         :param pool_timeout:
281s             If set and the pool is set to block=True, then this method will
281s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
281s             connection is available within the time period.
281s 
281s         :param bool preload_content:
281s             If True, the response's body will be preloaded into memory.
281s 
281s         :param bool decode_content:
281s             If True, will attempt to decode the body based on the
281s             'content-encoding' header.
281s 
281s         :param release_conn:
281s             If False, then the urlopen call will not release the connection
281s             back into the pool once a response is received (but will release if
281s             you read the entire contents of the response such as when
281s             `preload_content=True`). This is useful if you're not preloading
281s             the response's content immediately. You will need to call
281s             ``r.release_conn()`` on the response ``r`` to return the connection
281s             back into the pool. If None, it takes the value of ``preload_content``
281s             which defaults to ``True``.
281s 
281s         :param bool chunked:
281s             If True, urllib3 will send the body using chunked transfer
281s             encoding. Otherwise, urllib3 will send the body using the standard
281s             content-length form. Defaults to False.
281s 
281s         :param int body_pos:
281s             Position to seek to in file-like body in the event of a retry or
281s             redirect. Typically this won't need to be set because urllib3 will
281s             auto-populate the value when needed.
281s         """
281s         parsed_url = parse_url(url)
281s         destination_scheme = parsed_url.scheme
281s 
281s         if headers is None:
281s             headers = self.headers
281s 
281s         if not isinstance(retries, Retry):
281s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
281s 
281s         if release_conn is None:
281s             release_conn = preload_content
281s 
281s         # Check host
281s         if assert_same_host and not self.is_same_host(url):
281s             raise HostChangedError(self, url, retries)
281s 
281s         # Ensure that the URL we're connecting to is properly encoded
281s         if url.startswith("/"):
281s             url = to_str(_encode_target(url))
281s         else:
281s             url = to_str(parsed_url.url)
281s 
281s         conn = None
281s 
281s         # Track whether `conn` needs to be released before
281s         # returning/raising/recursing. Update this variable if necessary, and
281s         # leave `release_conn` constant throughout the function. That way, if
281s         # the function recurses, the original value of `release_conn` will be
281s         # passed down into the recursive call, and its value will be respected.
281s         #
281s         # See issue #651 [1] for details.
281s         #
281s         # [1] 
281s         release_this_conn = release_conn
281s 
281s         http_tunnel_required = connection_requires_http_tunnel(
281s             self.proxy, self.proxy_config, destination_scheme
281s         )
281s 
281s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
281s         # have to copy the headers dict so we can safely change it without those
281s         # changes being reflected in anyone else's copy.
281s         if not http_tunnel_required:
281s             headers = headers.copy() # type: ignore[attr-defined]
281s             headers.update(self.proxy_headers) # type: ignore[union-attr]
281s 
281s         # Must keep the exception bound to a separate variable or else Python 3
281s         # complains about UnboundLocalError.
281s         err = None
281s 
281s         # Keep track of whether we cleanly exited the except block. This
281s         # ensures we do proper cleanup in finally.
281s         clean_exit = False
281s 
281s         # Rewind body position, if needed. Record current position
281s         # for future rewinds in the event of a redirect/retry.
281s         body_pos = set_file_position(body, body_pos)
281s 
281s         try:
281s             # Request a connection from the queue.
281s             timeout_obj = self._get_timeout(timeout)
281s             conn = self._get_conn(timeout=pool_timeout)
281s 
281s             conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment]
281s 
281s             # Is this a closed/new connection that requires CONNECT tunnelling?
281s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
281s                 try:
281s                     self._prepare_proxy(conn)
281s                 except (BaseSSLError, OSError, SocketTimeout) as e:
281s                     self._raise_timeout(
281s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
281s                     )
281s                     raise
281s 
281s             # If we're going to release the connection in ``finally:``, then
281s             # the response doesn't need to know about the connection. Otherwise
281s             # it will also try to release it and we'll have a double-release
281s             # mess.
281s             response_conn = conn if not release_conn else None
281s 
281s             # Make the request on the HTTPConnection object
281s >           response = self._make_request(
281s                 conn,
281s                 method,
281s                 url,
281s                 timeout=timeout_obj,
281s                 body=body,
281s                 headers=headers,
281s                 chunked=chunked,
281s                 retries=retries,
281s                 response_conn=response_conn,
281s                 preload_content=preload_content,
281s                 decode_content=decode_content,
281s                 **response_kw,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
281s     conn.request(
281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
281s     self.endheaders()
281s /usr/lib/python3.12/http/client.py:1331: in endheaders
281s     self._send_output(message_body, encode_chunked=encode_chunked)
281s /usr/lib/python3.12/http/client.py:1091: in _send_output
281s     self.send(msg)
281s /usr/lib/python3.12/http/client.py:1035: in send
281s     self.connect()
281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
281s     self.sock = self._new_conn()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s             sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s         except socket.gaierror as e:
281s             raise NameResolutionError(self.host, self, e) from e
281s         except SocketTimeout as e:
281s             raise ConnectTimeoutError(
281s                 self,
281s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
281s             ) from e
281s 
281s         except OSError as e:
281s >           raise NewConnectionError(
281s                 self, f"Failed to establish a new connection: {e}"
281s             ) from e
281s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s >           resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:486:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
281s     retries = retries.increment(
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s 
281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s method = 'GET', url = '/a%40b/api/contents', response = None
281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
281s _pool = 
281s _stacktrace = 
281s 
281s     def increment(
281s         self,
281s         method: str | None = None,
281s         url: str | None = None,
281s         response: BaseHTTPResponse | None = None,
281s         error: Exception | None = None,
281s         _pool: ConnectionPool | None = None,
281s         _stacktrace: TracebackType | None = None,
281s     ) -> Retry:
281s         """Return a new Retry object with incremented retry counters.
281s 
281s         :param response: A response object, or None, if the server did not
281s             return a response.
281s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
281s         :param Exception error: An error encountered during the request, or
281s             None if the response was received successfully.
281s 
281s         :return: A new ``Retry`` object.
281s         """
281s         if self.total is False and error:
281s             # Disabled, indicate to re-raise the error.
281s             raise reraise(type(error), error, _stacktrace)
281s 
281s         total = self.total
281s         if total is not None:
281s             total -= 1
281s 
281s         connect = self.connect
281s         read = self.read
281s         redirect = self.redirect
281s         status_count = self.status
281s         other = self.other
281s         cause = "unknown"
281s         status = None
281s         redirect_location = None
281s 
281s         if error and self._is_connection_error(error):
281s             # Connect retry?
281s             if connect is False:
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif connect is not None:
281s                 connect -= 1
281s 
281s         elif error and self._is_read_error(error):
281s             # Read retry?
281s             if read is False or method is None or not self._is_method_retryable(method):
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif read is not None:
281s                 read -= 1
281s 
281s         elif error:
281s             # Other retry?
281s             if other is not None:
281s                 other -= 1
281s 
281s         elif response and response.get_redirect_location():
281s             # Redirect retry?
281s             if redirect is not None:
281s                 redirect -= 1
281s             cause = "too many redirects"
281s             response_redirect_location = response.get_redirect_location()
281s             if response_redirect_location:
281s                 redirect_location = response_redirect_location
281s             status = response.status
281s 
281s         else:
281s             # Incrementing because of a server error like a 500 in
281s             # status_forcelist and the given method is in the allowed_methods
281s             cause = ResponseError.GENERIC_ERROR
281s             if response and response.status:
281s                 if status_count is not None:
281s                     status_count -= 1
281s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
281s                 status = response.status
281s 
281s         history = self.history + (
281s             RequestHistory(method, url, error, status, redirect_location),
281s         )
281s 
281s         new_retry = self.new(
281s             total=total,
281s             connect=connect,
281s             read=read,
281s             redirect=redirect,
281s             status=status_count,
281s             other=other,
281s             history=history,
281s         )
281s 
281s         if new_retry.is_exhausted():
281s             reason = error or ResponseError(cause)
281s >           raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
281s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s >               cls.fetch_url(url)
281s 
281s notebook/tests/launchnotebook.py:53:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s notebook/tests/launchnotebook.py:82: in fetch_url
281s     return requests.get(url)
281s /usr/lib/python3/dist-packages/requests/api.py:73: in get
281s     return request("get", url, params=params, **kwargs)
281s /usr/lib/python3/dist-packages/requests/api.py:59: in request
281s     return session.request(method=method, url=url, **kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
281s     resp = self.send(prep, **send_kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
281s     r = adapter.send(request, **kwargs)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s             resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s         except (ProtocolError, OSError) as err:
281s             raise ConnectionError(err, request=request)
281s 
281s         except MaxRetryError as e:
281s             if isinstance(e.reason, ConnectTimeoutError):
281s                 # TODO: Remove this in 3.0.0: see #2811
281s                 if not isinstance(e.reason, NewConnectionError):
281s                     raise ConnectTimeout(e, request=request)
281s 
281s             if isinstance(e.reason, ResponseError):
281s                 raise RetryError(e, request=request)
281s 
281s             if isinstance(e.reason, _ProxyError):
281s                 raise ProxyError(e, request=request)
281s 
281s             if isinstance(e.reason, _SSLError):
281s                 # This branch is for urllib3 v1.22 and later.
281s                 raise SSLError(e, request=request)
281s 
281s >           raise ConnectionError(e, request=request)
281s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def setup_class(cls):
281s         cls.tmp_dir = TemporaryDirectory()
281s         def tmp(*parts):
281s             path = os.path.join(cls.tmp_dir.name, *parts)
281s             try:
281s                 os.makedirs(path)
281s             except OSError as e:
281s                 if e.errno != errno.EEXIST:
281s                     raise
281s             return path
281s 
281s         cls.home_dir = tmp('home')
281s         data_dir = cls.data_dir = tmp('data')
281s         config_dir = cls.config_dir = tmp('config')
281s         runtime_dir = cls.runtime_dir = tmp('runtime')
281s         cls.notebook_dir = tmp('notebooks')
281s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
281s         cls.env_patch.start()
281s         # Patch systemwide & user-wide data & config directories, to isolate
281s         # the tests from oddities of the local setup. But leave Python env
281s         # locations alone, so data files for e.g. nbconvert are accessible.
281s         # If this isolation isn't sufficient, you may need to run the tests in
281s         # a virtualenv or conda env.
281s         cls.path_patch = patch.multiple(
281s             jupyter_core.paths,
281s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
281s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
281s         )
281s         cls.path_patch.start()
281s 
281s         config = cls.config or Config()
281s         config.NotebookNotary.db_file = ':memory:'
281s 
281s         cls.token = hexlify(os.urandom(4)).decode('ascii')
281s 
281s         started = Event()
281s         def start_thread():
281s             try:
281s                 bind_args = cls.get_bind_args()
281s                 app = cls.notebook = NotebookApp(
281s                     port_retries=0,
281s                     open_browser=False,
281s                     config_dir=cls.config_dir,
281s                     data_dir=cls.data_dir,
281s                     runtime_dir=cls.runtime_dir,
281s                     notebook_dir=cls.notebook_dir,
281s                     base_url=cls.url_prefix,
281s                     config=config,
281s                     allow_root=True,
281s                     token=cls.token,
281s                     **bind_args
281s                 )
281s                 if "asyncio" in sys.modules:
281s                     app._init_asyncio_patch()
281s                 import asyncio
281s 
281s                 asyncio.set_event_loop(asyncio.new_event_loop())
281s                 # Patch the current loop in order to match production
281s                 # behavior
281s                 import nest_asyncio
281s 
281s                 nest_asyncio.apply()
281s                 # don't register signal handler during tests
281s                 app.init_signal = lambda : None
281s                 # clear log handlers and propagate to root for nose to capture it
281s                 # needs to be redone after initialize, which reconfigures logging
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 app.initialize(argv=cls.get_argv())
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 loop = IOLoop.current()
281s                 loop.add_callback(started.set)
281s                 app.start()
281s             finally:
281s                 # set the event, so failure to start doesn't cause a hang
281s                 started.set()
281s                 app.session_manager.close()
281s         cls.notebook_thread = Thread(target=start_thread)
281s         cls.notebook_thread.daemon = True
281s         cls.notebook_thread.start()
281s         started.wait()
281s >       cls.wait_until_alive()
281s 
281s notebook/tests/launchnotebook.py:198:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E                   RuntimeError: The notebook server failed to start
281s 
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s _________ ERROR at setup of TerminalAPITest.test_terminal_root_handler _________
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s >           sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s     raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s 
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s 
281s def create_connection(
281s     address: tuple[str, int],
281s     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s     source_address: tuple[str, int] | None = None,
281s     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s ) -> socket.socket:
281s     """Connect to *address* and return the socket object.
281s 
281s     Convenience function. Connect to *address* (a 2-tuple ``(host,
281s     port)``) and return the socket object. Passing the optional
281s     *timeout* parameter will set the timeout on the socket instance
281s     before attempting to connect. If no *timeout* is supplied, the
281s     global default timeout setting returned by :func:`socket.getdefaulttimeout`
281s     is used. If *source_address* is set it must be a tuple of (host, port)
281s     for the socket to bind as a source address before making the connection.
281s     An host of '' or port 0 tells the OS to use the default.
281s     """
281s 
281s     host, port = address
281s     if host.startswith("["):
281s         host = host.strip("[]")
281s     err = None
281s 
281s     # Using the value from allowed_gai_family() in the context of getaddrinfo lets
281s     # us select whether to work with IPv4 DNS records, IPv6 records, or both.
281s     # The original create_connection function always returns all records.
281s     family = allowed_gai_family()
281s 
281s     try:
281s         host.encode("idna")
281s     except UnicodeError:
281s         raise LocationParseError(f"'{host}', label empty or too long") from None
281s 
281s     for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
281s         af, socktype, proto, canonname, sa = res
281s         sock = None
281s         try:
281s             sock = socket.socket(af, socktype, proto)
281s 
281s             # If provided, set socket level options before connecting.
281s             _set_socket_options(sock, socket_options)
281s 
281s             if timeout is not _DEFAULT_TIMEOUT:
281s                 sock.settimeout(timeout)
281s             if source_address:
281s                 sock.bind(source_address)
281s >           sock.connect(sa)
281s E           ConnectionRefusedError: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s method = 'GET', url = '/a%40b/api/contents', body = None
281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s redirect = False, assert_same_host = False
281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
281s release_conn = False, chunked = False, body_pos = None, preload_content = False
281s decode_content = False, response_kw = {}
281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
281s destination_scheme = None, conn = None, release_this_conn = True
281s http_tunnel_required = False, err = None, clean_exit = False
281s 
281s     def urlopen( # type: ignore[override]
281s         self,
281s         method: str,
281s         url: str,
281s         body: _TYPE_BODY | None = None,
281s         headers: typing.Mapping[str, str] | None = None,
281s         retries: Retry | bool | int | None = None,
281s         redirect: bool = True,
281s         assert_same_host: bool = True,
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         pool_timeout: int | None = None,
281s         release_conn: bool | None = None,
281s         chunked: bool = False,
281s         body_pos: _TYPE_BODY_POSITION | None = None,
281s         preload_content: bool = True,
281s         decode_content: bool = True,
281s         **response_kw: typing.Any,
281s     ) -> BaseHTTPResponse:
281s         """
281s         Get a connection from the pool and perform an HTTP request. This is the
281s         lowest level call for making a request, so you'll need to specify all
281s         the raw details.
281s 
281s         .. note::
281s 
281s            More commonly, it's appropriate to use a convenience method
281s            such as :meth:`request`.
281s 
281s         .. note::
281s 
281s            `release_conn` will only behave as expected if
281s            `preload_content=False` because we want to make
281s            `preload_content=False` the default behaviour someday soon without
281s            breaking backwards compatibility.
281s 
281s         :param method:
281s             HTTP request method (such as GET, POST, PUT, etc.)
281s 
281s         :param url:
281s             The URL to perform the request on.
281s 
281s         :param body:
281s             Data to send in the request body, either :class:`str`, :class:`bytes`,
281s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
281s 
281s         :param headers:
281s             Dictionary of custom headers to send, such as User-Agent,
281s             If-None-Match, etc. If None, pool headers are used. If provided,
281s             these headers completely replace any pool-specific headers.
281s 
281s         :param retries:
281s             Configure the number of retries to allow before raising a
281s             :class:`~urllib3.exceptions.MaxRetryError` exception.
281s 
281s             Pass ``None`` to retry until you receive a response. Pass a
281s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
281s             over different types of retries.
281s             Pass an integer number to retry connection errors that many times,
281s             but no other types of errors. Pass zero to never retry.
281s 
281s             If ``False``, then retries are disabled and any exception is raised
281s             immediately. Also, instead of raising a MaxRetryError on redirects,
281s             the redirect response will be returned.
281s 
281s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
281s 
281s         :param redirect:
281s             If True, automatically handle redirects (status codes 301, 302,
281s             303, 307, 308). Each redirect counts as a retry. Disabling retries
281s             will disable redirect, too.
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
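Editor's note: `_new_conn` above shows urllib3's exception-translation pattern: the low-level `OSError` is re-raised as a library-specific type, chained with `from e` so the original cause survives (which is why the log can print both errors). A minimal sketch with a stand-in exception class, not the real urllib3 type:

```python
import socket

class NewConnectionError(Exception):
    """Stand-in for urllib3.exceptions.NewConnectionError."""

def new_conn(address, timeout=1):
    try:
        return socket.create_connection(address, timeout=timeout)
    except OSError as e:
        # Chain with `from e`: the OSError stays reachable as __cause__.
        raise NewConnectionError(f"Failed to establish a new connection: {e}") from e

# Probe for a closed port (bind to 0, then close it).
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
closed_port = probe.getsockname()[1]
probe.close()

try:
    new_conn(("127.0.0.1", closed_port))
except NewConnectionError as exc:
    cause = exc.__cause__

print(isinstance(cause, OSError))  # True: the original error is preserved
```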
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
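Editor's note: the `isinstance(timeout, tuple)` branch above is how requests normalizes its `timeout` argument. Sketched without the `TimeoutSauce` class: a bare number sets both phases, a 2-tuple splits them into (connect, read), anything else of the wrong shape is rejected.

```python
def normalize_timeout(timeout):
    """Simplified sketch of requests' timeout handling (no TimeoutSauce)."""
    if isinstance(timeout, tuple):
        try:
            connect, read = timeout  # must be exactly two elements
        except ValueError:
            raise ValueError(
                f"Invalid timeout {timeout}. Pass a (connect, read) timeout "
                f"tuple, or a single float to set both timeouts to the same value."
            )
        return connect, read
    # A single value applies to both the connect and the read phase.
    return timeout, timeout

print(normalize_timeout(5))        # (5, 5)
print(normalize_timeout((3, 27)))  # (3, 27)
```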
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
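Editor's note: `Retry(total=0, ...)` is why the very first refused connection surfaces as `MaxRetryError` here. A simplified sketch of the bookkeeping in `Retry.increment` (connect-error path only; the real method also tracks read/redirect/status counters, and `is_exhausted` takes the minimum over all of them):

```python
class MaxRetryError(Exception):
    """Stand-in for urllib3.exceptions.MaxRetryError."""

def increment(total, connect, *, is_connection_error):
    """Decrement retry counters; raise once the budget is exhausted."""
    if total is not None:
        total -= 1
    if is_connection_error:
        if connect is False:
            # connect retries disabled: re-raise the original error instead
            raise RuntimeError("connect retries disabled")
        if connect is not None:
            connect -= 1
    if total is not None and total < 0:  # roughly Retry.is_exhausted()
        raise MaxRetryError("Max retries exceeded")
    return total, connect

# With total=0, one connection error exhausts the budget immediately.
try:
    increment(0, None, is_connection_error=True)
except MaxRetryError as e:
    print("exhausted:", e)
```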
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
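Editor's note: the `except MaxRetryError` block above inspects `e.reason` to pick the public requests exception. The `isinstance` exclusion exists because in urllib3 `NewConnectionError` historically derives from `ConnectTimeoutError` (the `#2811` TODO in the quoted source). The dispatch, sketched with stand-in classes that only mirror that hierarchy:

```python
# Stand-in exception classes; the real ones live in urllib3/requests.
class ConnectTimeoutError(Exception): pass
class NewConnectionError(ConnectTimeoutError): pass  # legacy subclassing
class ResponseError(Exception): pass

def map_reason(reason):
    """Name of the requests exception the adapter would raise for `reason`."""
    if isinstance(reason, ConnectTimeoutError) and not isinstance(
        reason, NewConnectionError
    ):
        return "ConnectTimeout"
    if isinstance(reason, ResponseError):
        return "RetryError"
    return "ConnectionError"

print(map_reason(ConnectTimeoutError()))  # ConnectTimeout
print(map_reason(NewConnectionError()))   # ConnectionError (this log's path)
```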
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s ______________ ERROR at setup of TerminalCullingTest.test_config _______________ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
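The ``timeout`` handling described in the ``send`` docstring above (a float, or a ``(connect, read)`` tuple) is normalized by the adapter before the request is made. A dependency-free mimic of that normalization, where a plain ``(connect, read)`` tuple stands in for requests' ``TimeoutSauce`` (urllib3's ``Timeout``):

```python
# Hypothetical mimic of the timeout normalization in requests' adapter
# send() quoted above; a plain tuple stands in for TimeoutSauce.
def normalize_timeout(timeout):
    if isinstance(timeout, tuple):
        try:
            connect, read = timeout
        except ValueError:
            raise ValueError(
                f"Invalid timeout {timeout}. Pass a (connect, read) timeout "
                f"tuple, or a single float to set both timeouts to the same value."
            )
        return (connect, read)
    # a single value sets both the connect and the read timeout
    return (timeout, timeout)

print(normalize_timeout(5.0))         # (5.0, 5.0)
print(normalize_timeout((3.05, 27)))  # (3.05, 27)
```

In the log the failing test passes no timeout at all, which is why the repr shows ``Timeout(connect=None, read=None, total=None)``: the connect attempt fails fast with ECONNREFUSED rather than waiting on any timer.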
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
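The request path ``/a%40b/api/contents`` appearing in the reprs above is simply the percent-encoded form of the test's base URL path ``/a@b/api/contents`` (the launcher uses ``a@b`` as a URL prefix to exercise escaping). The stdlib reproduces the encoding:

```python
# '/a%40b/api/contents' is '/a@b/api/contents' with '@' percent-encoded.
from urllib.parse import quote, unquote

encoded = quote("/a@b/api/contents", safe="/")
print(encoded)           # /a%40b/api/contents
print(unquote(encoded))  # /a@b/api/contents
```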
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
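``setup_class`` above starts the notebook server in a daemon thread and then calls ``wait_until_alive``, which polls the contents API until it answers or the time budget runs out. The general pattern, sketched with an illustrative probe and made-up interval/budget values (the real constants are ``MAX_WAITTIME``/``POLL_INTERVAL`` in launchnotebook.py):

```python
# Generic poll-until-ready sketch of the wait_until_alive pattern above:
# retry a probe at a fixed interval; if the budget is exhausted, raise
# with the last error chained as the cause.
import time

def wait_until_alive(probe, max_waittime=30.0, poll_interval=0.1):
    last_error = None
    for _ in range(int(max_waittime / poll_interval)):
        try:
            return probe()
        except Exception as e:  # mirrors the broad catch in the test helper
            last_error = e
            time.sleep(poll_interval)
    raise RuntimeError("The notebook server failed to start") from last_error

# Illustrative probe that fails twice before succeeding:
attempts = {"n": 0}
def flaky_probe():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionRefusedError("not up yet")
    return "alive"

print(wait_until_alive(flaky_probe, max_waittime=1.0, poll_interval=0.01))
```

In the log the probe never succeeds; the poll loop notices the server thread has died and converts the last ``ConnectionError`` into the ``RuntimeError`` seen at the bottom of the traceback.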
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s ______________ ERROR at setup of TerminalCullingTest.test_culling ______________ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
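The ``[Errno 111] Connection refused`` raised by ``sock.connect(sa)`` above is the OS-level ECONNREFUSED you get whenever nothing is listening on the target port, which is the root cause of this whole failure: the notebook server never came up on port 12341. A minimal stdlib reproduction (a fresh port is picked and immediately released, so nothing should be listening on it):

```python
# Reproduce ECONNREFUSED: connect to a localhost port with no listener.
import socket

with socket.socket() as s:
    s.bind(("127.0.0.1", 0))   # let the OS pick a free port
    port = s.getsockname()[1]  # note it; it is released when the socket closes

try:
    socket.create_connection(("127.0.0.1", port), timeout=1)
    refused = False
except ConnectionRefusedError as e:
    refused = True
    print("refused, errno:", e.errno)  # 111 (ECONNREFUSED) on Linux
```

urllib3 wraps this ``OSError`` subclass into ``NewConnectionError``, requests wraps that into ``ConnectionError``, and the test harness finally reports "The notebook server failed to start".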
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s ______________ ERROR at setup of FilesTest.test_contents_manager _______________ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s                 raise reraise(type(error), error, _stacktrace)
281s
281s         total = self.total
281s         if total is not None:
281s             total -= 1
281s
281s         connect = self.connect
281s         read = self.read
281s         redirect = self.redirect
281s         status_count = self.status
281s         other = self.other
281s         cause = "unknown"
281s         status = None
281s         redirect_location = None
281s
281s         if error and self._is_connection_error(error):
281s             # Connect retry?
281s             if connect is False:
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif connect is not None:
281s                 connect -= 1
281s
281s         elif error and self._is_read_error(error):
281s             # Read retry?
281s             if read is False or method is None or not self._is_method_retryable(method):
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif read is not None:
281s                 read -= 1
281s
281s         elif error:
281s             # Other retry?
281s             if other is not None:
281s                 other -= 1
281s
281s         elif response and response.get_redirect_location():
281s             # Redirect retry?
281s             if redirect is not None:
281s                 redirect -= 1
281s             cause = "too many redirects"
281s             response_redirect_location = response.get_redirect_location()
281s             if response_redirect_location:
281s                 redirect_location = response_redirect_location
281s             status = response.status
281s
281s         else:
281s             # Incrementing because of a server error like a 500 in
281s             # status_forcelist and the given method is in the allowed_methods
281s             cause = ResponseError.GENERIC_ERROR
281s             if response and response.status:
281s                 if status_count is not None:
281s                     status_count -= 1
281s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
281s                 status = response.status
281s
281s         history = self.history + (
281s             RequestHistory(method, url, error, status, redirect_location),
281s         )
281s
281s         new_retry = self.new(
281s             total=total,
281s             connect=connect,
281s             read=read,
281s             redirect=redirect,
281s             status=status_count,
281s             other=other,
281s             history=history,
281s         )
281s
281s         if new_retry.is_exhausted():
281s             reason = error or ResponseError(cause)
281s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
281s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s
281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
281s
281s During handling of the above exception, another exception occurred:
281s
281s cls = 
281s
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s >               cls.fetch_url(url)
281s
281s notebook/tests/launchnotebook.py:53: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s notebook/tests/launchnotebook.py:82: in fetch_url
281s     return requests.get(url)
281s /usr/lib/python3/dist-packages/requests/api.py:73: in get
281s     return request("get", url, params=params, **kwargs)
281s /usr/lib/python3/dist-packages/requests/api.py:59: in request
281s     return session.request(method=method, url=url, **kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
281s     resp = self.send(prep, **send_kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
281s     r = adapter.send(request, **kwargs)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s
281s         try:
281s             resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s
281s         except (ProtocolError, OSError) as err:
281s             raise ConnectionError(err, request=request)
281s
281s         except MaxRetryError as e:
281s             if isinstance(e.reason, ConnectTimeoutError):
281s                 # TODO: Remove this in 3.0.0: see #2811
281s                 if not isinstance(e.reason, NewConnectionError):
281s                     raise ConnectTimeout(e, request=request)
281s
281s             if isinstance(e.reason, ResponseError):
281s                 raise RetryError(e, request=request)
281s
281s             if isinstance(e.reason, _ProxyError):
281s                 raise ProxyError(e, request=request)
281s
281s             if isinstance(e.reason, _SSLError):
281s                 # This branch is for urllib3 v1.22 and later.
281s                 raise SSLError(e, request=request)
281s
281s >           raise ConnectionError(e, request=request)
281s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s
281s The above exception was the direct cause of the following exception:
281s
281s cls = 
281s
281s     @classmethod
281s     def setup_class(cls):
281s         cls.tmp_dir = TemporaryDirectory()
281s         def tmp(*parts):
281s             path = os.path.join(cls.tmp_dir.name, *parts)
281s             try:
281s                 os.makedirs(path)
281s             except OSError as e:
281s                 if e.errno != errno.EEXIST:
281s                     raise
281s             return path
281s
281s         cls.home_dir = tmp('home')
281s         data_dir = cls.data_dir = tmp('data')
281s         config_dir = cls.config_dir = tmp('config')
281s         runtime_dir = cls.runtime_dir = tmp('runtime')
281s         cls.notebook_dir = tmp('notebooks')
281s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
281s         cls.env_patch.start()
281s         # Patch systemwide & user-wide data & config directories, to isolate
281s         # the tests from oddities of the local setup. But leave Python env
281s         # locations alone, so data files for e.g. nbconvert are accessible.
281s         # If this isolation isn't sufficient, you may need to run the tests in
281s         # a virtualenv or conda env.
281s         cls.path_patch = patch.multiple(
281s             jupyter_core.paths,
281s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
281s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
281s         )
281s         cls.path_patch.start()
281s
281s         config = cls.config or Config()
281s         config.NotebookNotary.db_file = ':memory:'
281s
281s         cls.token = hexlify(os.urandom(4)).decode('ascii')
281s
281s         started = Event()
281s         def start_thread():
281s             try:
281s                 bind_args = cls.get_bind_args()
281s                 app = cls.notebook = NotebookApp(
281s                     port_retries=0,
281s                     open_browser=False,
281s                     config_dir=cls.config_dir,
281s                     data_dir=cls.data_dir,
281s                     runtime_dir=cls.runtime_dir,
281s                     notebook_dir=cls.notebook_dir,
281s                     base_url=cls.url_prefix,
281s                     config=config,
281s                     allow_root=True,
281s                     token=cls.token,
281s                     **bind_args
281s                 )
281s                 if "asyncio" in sys.modules:
281s                     app._init_asyncio_patch()
281s                     import asyncio
281s
281s                     asyncio.set_event_loop(asyncio.new_event_loop())
281s                     # Patch the current loop in order to match production
281s                     # behavior
281s                     import nest_asyncio
281s
281s                     nest_asyncio.apply()
281s                 # don't register signal handler during tests
281s                 app.init_signal = lambda : None
281s                 # clear log handlers and propagate to root for nose to capture it
281s                 # needs to be redone after initialize, which reconfigures logging
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 app.initialize(argv=cls.get_argv())
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 loop = IOLoop.current()
281s                 loop.add_callback(started.set)
281s                 app.start()
281s             finally:
281s                 # set the event, so failure to start doesn't cause a hang
281s                 started.set()
281s                 app.session_manager.close()
281s         cls.notebook_thread = Thread(target=start_thread)
281s         cls.notebook_thread.daemon = True
281s         cls.notebook_thread.start()
281s         started.wait()
281s >       cls.wait_until_alive()
281s
281s notebook/tests/launchnotebook.py:198: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s
281s cls = 
281s
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E                   RuntimeError: The notebook server failed to start
281s
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s __________________ ERROR at setup of FilesTest.test_download ___________________
281s
281s self = 
281s
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s
281s         :return: New socket connection.
281s         """
281s         try:
281s >           sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s     raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s
281s     def create_connection(
281s         address: tuple[str, int],
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         source_address: tuple[str, int] | None = None,
281s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s     ) -> socket.socket:
281s         """Connect to *address* and return the socket object.
281s
281s         Convenience function. Connect to *address* (a 2-tuple ``(host,
281s         port)``) and return the socket object. Passing the optional
281s         *timeout* parameter will set the timeout on the socket instance
281s         before attempting to connect. If no *timeout* is supplied, the
281s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
281s         is used. If *source_address* is set it must be a tuple of (host, port)
281s         for the socket to bind as a source address before making the connection.
281s         An host of '' or port 0 tells the OS to use the default.
281s         """
281s
281s         host, port = address
281s         if host.startswith("["):
281s             host = host.strip("[]")
281s         err = None
281s
281s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
281s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
281s         # The original create_connection function always returns all records.
281s         family = allowed_gai_family()
281s
281s         try:
281s             host.encode("idna")
281s         except UnicodeError:
281s             raise LocationParseError(f"'{host}', label empty or too long") from None
281s
281s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
281s             af, socktype, proto, canonname, sa = res
281s             sock = None
281s             try:
281s                 sock = socket.socket(af, socktype, proto)
281s
281s                 # If provided, set socket level options before connecting.
281s                 _set_socket_options(sock, socket_options)
281s
281s                 if timeout is not _DEFAULT_TIMEOUT:
281s                     sock.settimeout(timeout)
281s                 if source_address:
281s                     sock.bind(source_address)
281s >               sock.connect(sa)
281s E               ConnectionRefusedError: [Errno 111] Connection refused
281s
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
281s
281s The above exception was the direct cause of the following exception:
281s
281s self = 
281s method = 'GET', url = '/a%40b/api/contents', body = None
281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s redirect = False, assert_same_host = False
281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
281s release_conn = False, chunked = False, body_pos = None, preload_content = False
281s decode_content = False, response_kw = {}
281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
281s destination_scheme = None, conn = None, release_this_conn = True
281s http_tunnel_required = False, err = None, clean_exit = False
281s
281s     def urlopen(  # type: ignore[override]
281s         self,
281s         method: str,
281s         url: str,
281s         body: _TYPE_BODY | None = None,
281s         headers: typing.Mapping[str, str] | None = None,
281s         retries: Retry | bool | int | None = None,
281s         redirect: bool = True,
281s         assert_same_host: bool = True,
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         pool_timeout: int | None = None,
281s         release_conn: bool | None = None,
281s         chunked: bool = False,
281s         body_pos: _TYPE_BODY_POSITION | None = None,
281s         preload_content: bool = True,
281s         decode_content: bool = True,
281s         **response_kw: typing.Any,
281s     ) -> BaseHTTPResponse:
281s         """
281s         Get a connection from the pool and perform an HTTP request. This is the
281s         lowest level call for making a request, so you'll need to specify all
281s         the raw details.
281s
281s         .. note::
281s
281s             More commonly, it's appropriate to use a convenience method
281s             such as :meth:`request`.
281s
281s         .. note::
281s
281s             `release_conn` will only behave as expected if
281s             `preload_content=False` because we want to make
281s             `preload_content=False` the default behaviour someday soon without
281s             breaking backwards compatibility.
281s
281s         :param method:
281s             HTTP request method (such as GET, POST, PUT, etc.)
281s
281s         :param url:
281s             The URL to perform the request on.
281s
281s         :param body:
281s             Data to send in the request body, either :class:`str`, :class:`bytes`,
281s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
281s
281s         :param headers:
281s             Dictionary of custom headers to send, such as User-Agent,
281s             If-None-Match, etc. If None, pool headers are used. If provided,
281s             these headers completely replace any pool-specific headers.
281s
281s         :param retries:
281s             Configure the number of retries to allow before raising a
281s             :class:`~urllib3.exceptions.MaxRetryError` exception.
281s
281s             Pass ``None`` to retry until you receive a response. Pass a
281s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
281s             over different types of retries.
281s             Pass an integer number to retry connection errors that many times,
281s             but no other types of errors. Pass zero to never retry.
281s
281s             If ``False``, then retries are disabled and any exception is raised
281s             immediately. Also, instead of raising a MaxRetryError on redirects,
281s             the redirect response will be returned.
281s
281s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
281s
281s         :param redirect:
281s             If True, automatically handle redirects (status codes 301, 302,
281s             303, 307, 308). Each redirect counts as a retry. Disabling retries
281s             will disable redirect, too.
281s
281s         :param assert_same_host:
281s             If ``True``, will make sure that the host of the pool requests is
281s             consistent else will raise HostChangedError. When ``False``, you can
281s             use the pool on an HTTP proxy and request foreign hosts.
281s
281s         :param timeout:
281s             If specified, overrides the default timeout for this one
281s             request. It may be a float (in seconds) or an instance of
281s             :class:`urllib3.util.Timeout`.
281s
281s         :param pool_timeout:
281s             If set and the pool is set to block=True, then this method will
281s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
281s             connection is available within the time period.
281s
281s         :param bool preload_content:
281s             If True, the response's body will be preloaded into memory.
281s
281s         :param bool decode_content:
281s             If True, will attempt to decode the body based on the
281s             'content-encoding' header.
281s
281s         :param release_conn:
281s             If False, then the urlopen call will not release the connection
281s             back into the pool once a response is received (but will release if
281s             you read the entire contents of the response such as when
281s             `preload_content=True`). This is useful if you're not preloading
281s             the response's content immediately. You will need to call
281s             ``r.release_conn()`` on the response ``r`` to return the connection
281s             back into the pool. If None, it takes the value of ``preload_content``
281s             which defaults to ``True``.
281s
281s         :param bool chunked:
281s             If True, urllib3 will send the body using chunked transfer
281s             encoding. Otherwise, urllib3 will send the body using the standard
281s             content-length form. Defaults to False.
281s
281s         :param int body_pos:
281s             Position to seek to in file-like body in the event of a retry or
281s             redirect. Typically this won't need to be set because urllib3 will
281s             auto-populate the value when needed.
281s         """
281s         parsed_url = parse_url(url)
281s         destination_scheme = parsed_url.scheme
281s
281s         if headers is None:
281s             headers = self.headers
281s
281s         if not isinstance(retries, Retry):
281s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
281s
281s         if release_conn is None:
281s             release_conn = preload_content
281s
281s         # Check host
281s         if assert_same_host and not self.is_same_host(url):
281s             raise HostChangedError(self, url, retries)
281s
281s         # Ensure that the URL we're connecting to is properly encoded
281s         if url.startswith("/"):
281s             url = to_str(_encode_target(url))
281s         else:
281s             url = to_str(parsed_url.url)
281s
281s         conn = None
281s
281s         # Track whether `conn` needs to be released before
281s         # returning/raising/recursing. Update this variable if necessary, and
281s         # leave `release_conn` constant throughout the function. That way, if
281s         # the function recurses, the original value of `release_conn` will be
281s         # passed down into the recursive call, and its value will be respected.
281s         #
281s         # See issue #651 [1] for details.
281s         #
281s         # [1] 
281s         release_this_conn = release_conn
281s
281s         http_tunnel_required = connection_requires_http_tunnel(
281s             self.proxy, self.proxy_config, destination_scheme
281s         )
281s
281s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
281s         # have to copy the headers dict so we can safely change it without those
281s         # changes being reflected in anyone else's copy.
281s         if not http_tunnel_required:
281s             headers = headers.copy()  # type: ignore[attr-defined]
281s             headers.update(self.proxy_headers)  # type: ignore[union-attr]
281s
281s         # Must keep the exception bound to a separate variable or else Python 3
281s         # complains about UnboundLocalError.
281s         err = None
281s
281s         # Keep track of whether we cleanly exited the except block. This
281s         # ensures we do proper cleanup in finally.
281s         clean_exit = False
281s
281s         # Rewind body position, if needed. Record current position
281s         # for future rewinds in the event of a redirect/retry.
281s         body_pos = set_file_position(body, body_pos)
281s
281s         try:
281s             # Request a connection from the queue.
281s             timeout_obj = self._get_timeout(timeout)
281s             conn = self._get_conn(timeout=pool_timeout)
281s
281s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
281s
281s             # Is this a closed/new connection that requires CONNECT tunnelling?
281s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
281s                 try:
281s                     self._prepare_proxy(conn)
281s                 except (BaseSSLError, OSError, SocketTimeout) as e:
281s                     self._raise_timeout(
281s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
281s                     )
281s                     raise
281s
281s             # If we're going to release the connection in ``finally:``, then
281s             # the response doesn't need to know about the connection. Otherwise
281s             # it will also try to release it and we'll have a double-release
281s             # mess.
281s             response_conn = conn if not release_conn else None
281s
281s             # Make the request on the HTTPConnection object
281s >           response = self._make_request(
281s                 conn,
281s                 method,
281s                 url,
281s                 timeout=timeout_obj,
281s                 body=body,
281s                 headers=headers,
281s                 chunked=chunked,
281s                 retries=retries,
281s                 response_conn=response_conn,
281s                 preload_content=preload_content,
281s                 decode_content=decode_content,
281s                 **response_kw,
281s             )
281s
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
281s     conn.request(
281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
281s     self.endheaders()
281s /usr/lib/python3.12/http/client.py:1331: in endheaders
281s     self._send_output(message_body, encode_chunked=encode_chunked)
281s /usr/lib/python3.12/http/client.py:1091: in _send_output
281s     self.send(msg)
281s /usr/lib/python3.12/http/client.py:1035: in send
281s     self.connect()
281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
281s     self.sock = self._new_conn()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s
281s self = 
281s
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s
281s         :return: New socket connection.
281s         """
281s         try:
281s             sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s         except socket.gaierror as e:
281s             raise NameResolutionError(self.host, self, e) from e
281s         except SocketTimeout as e:
281s             raise ConnectTimeoutError(
281s                 self,
281s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
281s             ) from e
281s
281s         except OSError as e:
281s >           raise NewConnectionError(
281s                 self, f"Failed to establish a new connection: {e}"
281s             ) from e
281s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
281s
281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
281s
281s The above exception was the direct cause of the following exception:
281s
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s
281s         try:
281s >           resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s
281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
281s     retries = retries.increment(
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s
281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s method = 'GET', url = '/a%40b/api/contents', response = None
281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
281s _pool = 
281s _stacktrace = 
281s
281s     def increment(
281s         self,
281s         method: str | None = None,
281s         url: str | None = None,
281s         response: BaseHTTPResponse | None = None,
281s         error: Exception | None = None,
281s         _pool: ConnectionPool | None = None,
281s         _stacktrace: TracebackType | None = None,
281s     ) -> Retry:
281s         """Return a new Retry object with incremented retry counters.
281s
281s         :param response: A response object, or None, if the server did not
281s             return a response.
281s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
281s         :param Exception error: An error encountered during the request, or
281s             None if the response was received successfully.
281s
281s         :return: A new ``Retry`` object.
281s         """
281s         if self.total is False and error:
281s             # Disabled, indicate to re-raise the error.
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E                   RuntimeError: The notebook server failed to start
281s 
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s ________________ ERROR at setup of FilesTest.test_hidden_files _________________
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s >           sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s     raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s 
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s 
281s def create_connection(
281s     address: tuple[str, int],
281s     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s     source_address: tuple[str, int] | None = None,
281s     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s ) -> socket.socket:
281s     """Connect to *address* and return the socket object.
281s 
281s     Convenience function.  Connect to *address* (a 2-tuple ``(host,
281s     port)``) and return the socket object.  Passing the optional
281s     *timeout* parameter will set the timeout on the socket instance
281s     before attempting to connect.  If no *timeout* is supplied, the
281s     global default timeout setting returned by :func:`socket.getdefaulttimeout`
281s     is used.  If *source_address* is set it must be a tuple of (host, port)
281s     for the socket to bind as a source address before making the connection.
281s     An host of '' or port 0 tells the OS to use the default.
281s     """
281s 
281s     host, port = address
281s     if host.startswith("["):
281s         host = host.strip("[]")
281s     err = None
281s 
281s     # Using the value from allowed_gai_family() in the context of getaddrinfo lets
281s     # us select whether to work with IPv4 DNS records, IPv6 records, or both.
281s     # The original create_connection function always returns all records.
281s     family = allowed_gai_family()
281s 
281s     try:
281s         host.encode("idna")
281s     except UnicodeError:
281s         raise LocationParseError(f"'{host}', label empty or too long") from None
281s 
281s     for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
281s         af, socktype, proto, canonname, sa = res
281s         sock = None
281s         try:
281s             sock = socket.socket(af, socktype, proto)
281s 
281s             # If provided, set socket level options before connecting.
281s             _set_socket_options(sock, socket_options)
281s 
281s             if timeout is not _DEFAULT_TIMEOUT:
281s                 sock.settimeout(timeout)
281s             if source_address:
281s                 sock.bind(source_address)
281s >           sock.connect(sa)
281s E           ConnectionRefusedError: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s method = 'GET', url = '/a%40b/api/contents', body = None
281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s redirect = False, assert_same_host = False
281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
281s release_conn = False, chunked = False, body_pos = None, preload_content = False
281s decode_content = False, response_kw = {}
281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
281s destination_scheme = None, conn = None, release_this_conn = True
281s http_tunnel_required = False, err = None, clean_exit = False
281s 
281s     def urlopen(  # type: ignore[override]
281s         self,
281s         method: str,
281s         url: str,
281s         body: _TYPE_BODY | None = None,
281s         headers: typing.Mapping[str, str] | None = None,
281s         retries: Retry | bool | int | None = None,
281s         redirect: bool = True,
281s         assert_same_host: bool = True,
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         pool_timeout: int | None = None,
281s         release_conn: bool | None = None,
281s         chunked: bool = False,
281s         body_pos: _TYPE_BODY_POSITION | None = None,
281s         preload_content: bool = True,
281s         decode_content: bool = True,
281s         **response_kw: typing.Any,
281s     ) -> BaseHTTPResponse:
281s         """
281s         Get a connection from the pool and perform an HTTP request. This is the
281s         lowest level call for making a request, so you'll need to specify all
281s         the raw details.
281s 
281s         .. note::
281s 
281s            More commonly, it's appropriate to use a convenience method
281s            such as :meth:`request`.
281s 
281s         .. note::
281s 
281s            `release_conn` will only behave as expected if
281s            `preload_content=False` because we want to make
281s            `preload_content=False` the default behaviour someday soon without
281s            breaking backwards compatibility.
281s 
281s         :param method:
281s             HTTP request method (such as GET, POST, PUT, etc.)
281s 
281s         :param url:
281s             The URL to perform the request on.
281s 
281s         :param body:
281s             Data to send in the request body, either :class:`str`, :class:`bytes`,
281s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
281s 
281s         :param headers:
281s             Dictionary of custom headers to send, such as User-Agent,
281s             If-None-Match, etc. If None, pool headers are used. If provided,
281s             these headers completely replace any pool-specific headers.
281s 
281s         :param retries:
281s             Configure the number of retries to allow before raising a
281s             :class:`~urllib3.exceptions.MaxRetryError` exception.
281s 
281s             Pass ``None`` to retry until you receive a response. Pass a
281s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
281s             over different types of retries.
281s             Pass an integer number to retry connection errors that many times,
281s             but no other types of errors. Pass zero to never retry.
281s 
281s             If ``False``, then retries are disabled and any exception is raised
281s             immediately. Also, instead of raising a MaxRetryError on redirects,
281s             the redirect response will be returned.
281s 
281s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
281s 
281s         :param redirect:
281s             If True, automatically handle redirects (status codes 301, 302,
281s             303, 307, 308). Each redirect counts as a retry. Disabling retries
281s             will disable redirect, too.
281s 
281s         :param assert_same_host:
281s             If ``True``, will make sure that the host of the pool requests is
281s             consistent else will raise HostChangedError. When ``False``, you can
281s             use the pool on an HTTP proxy and request foreign hosts.
281s 
281s         :param timeout:
281s             If specified, overrides the default timeout for this one
281s             request. It may be a float (in seconds) or an instance of
281s             :class:`urllib3.util.Timeout`.
281s 
281s         :param pool_timeout:
281s             If set and the pool is set to block=True, then this method will
281s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
281s             connection is available within the time period.
281s 
281s         :param bool preload_content:
281s             If True, the response's body will be preloaded into memory.
281s 
281s         :param bool decode_content:
281s             If True, will attempt to decode the body based on the
281s             'content-encoding' header.
281s 
281s         :param release_conn:
281s             If False, then the urlopen call will not release the connection
281s             back into the pool once a response is received (but will release if
281s             you read the entire contents of the response such as when
281s             `preload_content=True`). This is useful if you're not preloading
281s             the response's content immediately. You will need to call
281s             ``r.release_conn()`` on the response ``r`` to return the connection
281s             back into the pool. If None, it takes the value of ``preload_content``
281s             which defaults to ``True``.
281s 
281s         :param bool chunked:
281s             If True, urllib3 will send the body using chunked transfer
281s             encoding. Otherwise, urllib3 will send the body using the standard
281s             content-length form. Defaults to False.
281s 
281s         :param int body_pos:
281s             Position to seek to in file-like body in the event of a retry or
281s             redirect. Typically this won't need to be set because urllib3 will
281s             auto-populate the value when needed.
281s         """
281s         parsed_url = parse_url(url)
281s         destination_scheme = parsed_url.scheme
281s 
281s         if headers is None:
281s             headers = self.headers
281s 
281s         if not isinstance(retries, Retry):
281s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
281s 
281s         if release_conn is None:
281s             release_conn = preload_content
281s 
281s         # Check host
281s         if assert_same_host and not self.is_same_host(url):
281s             raise HostChangedError(self, url, retries)
281s 
281s         # Ensure that the URL we're connecting to is properly encoded
281s         if url.startswith("/"):
281s             url = to_str(_encode_target(url))
281s         else:
281s             url = to_str(parsed_url.url)
281s 
281s         conn = None
281s 
281s         # Track whether `conn` needs to be released before
281s         # returning/raising/recursing. Update this variable if necessary, and
281s         # leave `release_conn` constant throughout the function. That way, if
281s         # the function recurses, the original value of `release_conn` will be
281s         # passed down into the recursive call, and its value will be respected.
281s         #
281s         # See issue #651 [1] for details.
281s         #
281s         # [1] 
281s         release_this_conn = release_conn
281s 
281s         http_tunnel_required = connection_requires_http_tunnel(
281s             self.proxy, self.proxy_config, destination_scheme
281s         )
281s 
281s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
281s         # have to copy the headers dict so we can safely change it without those
281s         # changes being reflected in anyone else's copy.
281s         if not http_tunnel_required:
281s             headers = headers.copy()  # type: ignore[attr-defined]
281s             headers.update(self.proxy_headers)  # type: ignore[union-attr]
281s 
281s         # Must keep the exception bound to a separate variable or else Python 3
281s         # complains about UnboundLocalError.
281s         err = None
281s 
281s         # Keep track of whether we cleanly exited the except block. This
281s         # ensures we do proper cleanup in finally.
281s         clean_exit = False
281s 
281s         # Rewind body position, if needed. Record current position
281s         # for future rewinds in the event of a redirect/retry.
281s         body_pos = set_file_position(body, body_pos)
281s 
281s         try:
281s             # Request a connection from the queue.
281s             timeout_obj = self._get_timeout(timeout)
281s             conn = self._get_conn(timeout=pool_timeout)
281s 
281s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
281s 
281s             # Is this a closed/new connection that requires CONNECT tunnelling?
281s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
281s                 try:
281s                     self._prepare_proxy(conn)
281s                 except (BaseSSLError, OSError, SocketTimeout) as e:
281s                     self._raise_timeout(
281s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
281s                     )
281s                     raise
281s 
281s             # If we're going to release the connection in ``finally:``, then
281s             # the response doesn't need to know about the connection. Otherwise
281s             # it will also try to release it and we'll have a double-release
281s             # mess.
281s             response_conn = conn if not release_conn else None
281s 
281s             # Make the request on the HTTPConnection object
281s >           response = self._make_request(
281s                 conn,
281s                 method,
281s                 url,
281s                 timeout=timeout_obj,
281s                 body=body,
281s                 headers=headers,
281s                 chunked=chunked,
281s                 retries=retries,
281s                 response_conn=response_conn,
281s                 preload_content=preload_content,
281s                 decode_content=decode_content,
281s                 **response_kw,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
281s     conn.request(
281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
281s     self.endheaders()
281s /usr/lib/python3.12/http/client.py:1331: in endheaders
281s     self._send_output(message_body, encode_chunked=encode_chunked)
281s /usr/lib/python3.12/http/client.py:1091: in _send_output
281s     self.send(msg)
281s /usr/lib/python3.12/http/client.py:1035: in send
281s     self.connect()
281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
281s     self.sock = self._new_conn()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s             sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s         except socket.gaierror as e:
281s             raise NameResolutionError(self.host, self, e) from e
281s         except SocketTimeout as e:
281s             raise ConnectTimeoutError(
281s                 self,
281s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
281s             ) from e
281s 
281s         except OSError as e:
281s >           raise NewConnectionError(
281s                 self, f"Failed to establish a new connection: {e}"
281s             ) from e
281s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s >           resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
281s     retries = retries.increment(
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s 
281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s method = 'GET', url = '/a%40b/api/contents', response = None
281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
281s _pool = 
281s _stacktrace = 
281s 
281s     def increment(
281s         self,
281s         method: str | None = None,
281s         url: str | None = None,
281s         response: BaseHTTPResponse | None = None,
281s         error: Exception | None = None,
281s         _pool: ConnectionPool | None = None,
281s         _stacktrace: TracebackType | None = None,
281s     ) -> Retry:
281s         """Return a new Retry object with incremented retry counters.
281s 
281s         :param response: A response object, or None, if the server did not
281s             return a response.
281s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
281s         :param Exception error: An error encountered during the request, or
281s             None if the response was received successfully.
281s 
281s         :return: A new ``Retry`` object.
281s         """
281s         if self.total is False and error:
281s             # Disabled, indicate to re-raise the error.
281s             raise reraise(type(error), error, _stacktrace)
281s 
281s         total = self.total
281s         if total is not None:
281s             total -= 1
281s 
281s         connect = self.connect
281s         read = self.read
281s         redirect = self.redirect
281s         status_count = self.status
281s         other = self.other
281s         cause = "unknown"
281s         status = None
281s         redirect_location = None
281s 
281s         if error and self._is_connection_error(error):
281s             # Connect retry?
281s             if connect is False:
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif connect is not None:
281s                 connect -= 1
281s 
281s         elif error and self._is_read_error(error):
281s             # Read retry?
281s             if read is False or method is None or not self._is_method_retryable(method):
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif read is not None:
281s                 read -= 1
281s 
281s         elif error:
281s             # Other retry?
281s             if other is not None:
281s                 other -= 1
281s 
281s         elif response and response.get_redirect_location():
281s             # Redirect retry?
281s             if redirect is not None:
281s                 redirect -= 1
281s             cause = "too many redirects"
281s             response_redirect_location = response.get_redirect_location()
281s             if response_redirect_location:
281s                 redirect_location = response_redirect_location
281s             status = response.status
281s 
281s         else:
281s             # Incrementing because of a server error like a 500 in
281s             # status_forcelist and the given method is in the allowed_methods
281s             cause = ResponseError.GENERIC_ERROR
281s             if response and response.status:
281s                 if status_count is not None:
281s                     status_count -= 1
281s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
281s                 status = response.status
281s 
281s         history = self.history + (
281s             RequestHistory(method, url, error, status, redirect_location),
281s         )
281s 
281s         new_retry = self.new(
281s             total=total,
281s             connect=connect,
281s             read=read,
281s             redirect=redirect,
281s             status=status_count,
281s             other=other,
281s             history=history,
281s         )
281s 
281s         if new_retry.is_exhausted():
281s             reason = error or ResponseError(cause)
281s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
281s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s >               cls.fetch_url(url)
281s 
281s notebook/tests/launchnotebook.py:53: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s notebook/tests/launchnotebook.py:82: in fetch_url
281s     return requests.get(url)
281s /usr/lib/python3/dist-packages/requests/api.py:73: in get
281s     return request("get", url, params=params, **kwargs)
281s /usr/lib/python3/dist-packages/requests/api.py:59: in request
281s     return session.request(method=method, url=url, **kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
281s     resp = self.send(prep, **send_kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
281s     r = adapter.send(request, **kwargs)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s             resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s         except (ProtocolError, OSError) as err:
281s             raise ConnectionError(err, request=request)
281s 
281s         except MaxRetryError as e:
281s             if isinstance(e.reason, ConnectTimeoutError):
281s                 # TODO: Remove this in 3.0.0: see #2811
281s                 if not isinstance(e.reason, NewConnectionError):
281s                     raise ConnectTimeout(e, request=request)
281s 
281s             if isinstance(e.reason, ResponseError):
281s                 raise RetryError(e, request=request)
281s 
281s             if isinstance(e.reason, _ProxyError):
281s                 raise ProxyError(e, request=request)
281s 
281s             if isinstance(e.reason, _SSLError):
281s                 # This branch is for urllib3 v1.22 and later.
281s                 raise SSLError(e, request=request)
281s 
281s >           raise ConnectionError(e, request=request)
281s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def setup_class(cls):
281s         cls.tmp_dir = TemporaryDirectory()
281s         def tmp(*parts):
281s             path = os.path.join(cls.tmp_dir.name, *parts)
281s             try:
281s                 os.makedirs(path)
281s             except OSError as e:
281s                 if e.errno != errno.EEXIST:
281s                     raise
281s             return path
281s 
281s         cls.home_dir = tmp('home')
281s         data_dir = cls.data_dir = tmp('data')
281s         config_dir = cls.config_dir = tmp('config')
281s         runtime_dir = cls.runtime_dir = tmp('runtime')
281s         cls.notebook_dir = tmp('notebooks')
281s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
281s         cls.env_patch.start()
281s         # Patch systemwide & user-wide data & config directories, to isolate
281s         # the tests from oddities of the local setup. But leave Python env
281s         # locations alone, so data files for e.g. nbconvert are accessible.
281s         # If this isolation isn't sufficient, you may need to run the tests in
281s         # a virtualenv or conda env.
281s         cls.path_patch = patch.multiple(
281s             jupyter_core.paths,
281s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
281s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
281s         )
281s         cls.path_patch.start()
281s 
281s         config = cls.config or Config()
281s         config.NotebookNotary.db_file = ':memory:'
281s 
281s         cls.token = hexlify(os.urandom(4)).decode('ascii')
281s 
281s         started = Event()
281s         def start_thread():
281s             try:
281s                 bind_args = cls.get_bind_args()
281s                 app = cls.notebook = NotebookApp(
281s                     port_retries=0,
281s                     open_browser=False,
281s                     config_dir=cls.config_dir,
281s                     data_dir=cls.data_dir,
281s                     runtime_dir=cls.runtime_dir,
281s                     notebook_dir=cls.notebook_dir,
281s                     base_url=cls.url_prefix,
281s                     config=config,
281s                     allow_root=True,
281s                     token=cls.token,
281s                     **bind_args
281s                 )
281s                 if "asyncio" in sys.modules:
281s                     app._init_asyncio_patch()
281s                     import asyncio
281s 
281s                     asyncio.set_event_loop(asyncio.new_event_loop())
281s                     # Patch the current loop in order to match production
281s                     # behavior
281s                     import nest_asyncio
281s 
281s                     nest_asyncio.apply()
281s                 # don't register signal handler during tests
281s                 app.init_signal = lambda : None
281s                 # clear log handlers and propagate to root for nose to capture it
281s                 # needs to be redone after initialize, which reconfigures logging
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 app.initialize(argv=cls.get_argv())
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 loop = IOLoop.current()
281s                 loop.add_callback(started.set)
281s                 app.start()
281s             finally:
281s                 # set the event, so failure to start doesn't cause a hang
281s                 started.set()
281s                 app.session_manager.close()
281s         cls.notebook_thread = Thread(target=start_thread)
281s         cls.notebook_thread.daemon = True
281s         cls.notebook_thread.start()
281s         started.wait()
281s >       cls.wait_until_alive()
281s 
281s notebook/tests/launchnotebook.py:198: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E                   RuntimeError: The notebook server failed to start
281s 
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s _____________ ERROR at setup of FilesTest.test_old_files_redirect ______________
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s >           sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s     raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s 
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s 
281s def create_connection(
281s     address: tuple[str, int],
281s     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s     source_address: tuple[str, int] | None = None,
281s     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s ) -> socket.socket:
281s     """Connect to *address* and return the socket object.
281s 
281s     Convenience function.  Connect to *address* (a 2-tuple ``(host,
281s     port)``) and return the socket object.
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
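
The `retries` semantics quoted in this docstring (``None``, an int, ``False``, or a ``Retry`` object) can be sketched without urllib3 at all. `normalize_retries` below is a hypothetical stand-in for `Retry.from_int`, not urllib3's actual implementation:

```python
# Minimal sketch of how urlopen's ``retries`` argument is interpreted,
# assuming only the semantics stated in the docstring above.

def normalize_retries(retries, default_total=3):
    """Map a urlopen-style ``retries`` value to (total, raise_on_redirect)."""
    if retries is None:
        # None falls back to the pool default.
        return default_total, True
    if retries is False:
        # False disables retries and returns redirect responses as-is
        # instead of raising MaxRetryError.
        return 0, False
    if isinstance(retries, int):
        # A bare integer only retries connection errors, that many times.
        return retries, True
    raise TypeError(f"unsupported retries value: {retries!r}")

print(normalize_retries(None))   # pool default applies
print(normalize_retries(False))  # disabled, redirects returned
print(normalize_retries(2))      # two connect retries
```

The test harness in this log passes `Retry(total=0, read=False, redirect=None)`, which is why a single refused connection is already fatal.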
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
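
The adapter's timeout handling visible in this frame accepts a float, a ``(connect, read)`` tuple, or a urllib3 ``Timeout``. A plain-tuple sketch of that normalization, with `as_connect_read` as an illustrative helper rather than requests' internal `TimeoutSauce`:

```python
# Sketch of the (connect, read) timeout normalization performed by
# requests' HTTPAdapter.send, using a plain tuple in place of the
# internal TimeoutSauce object.

def as_connect_read(timeout):
    """Return a (connect, read) pair from a float, a 2-tuple, or None."""
    if isinstance(timeout, tuple):
        try:
            connect, read = timeout
        except ValueError:
            raise ValueError(
                f"Invalid timeout {timeout}. Pass a (connect, read) tuple, "
                f"or a single float to set both timeouts to the same value."
            )
        return connect, read
    # A single number (or None, as in this log) applies to both phases.
    return timeout, timeout

assert as_connect_read(5.0) == (5.0, 5.0)
assert as_connect_read((3.05, 27)) == (3.05, 27)
assert as_connect_read(None) == (None, None)
```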
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
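
Why `Retry(total=0)` turns the very first refused connection into `MaxRetryError` follows from the `increment` countdown shown above. A stripped-down model (the names here are illustrative, not urllib3's API):

```python
# Minimal model of Retry.increment's exhaustion check: each error
# decrements ``total``; once it drops below zero, the original error is
# wrapped and re-raised, which is what produces the "Max retries
# exceeded" message seen in this log.

class MaxRetriesExceeded(Exception):
    pass

def increment(total, error):
    if total is None:
        return None  # unlimited: keep retrying
    total -= 1
    if total < 0:  # exhausted
        raise MaxRetriesExceeded(f"Max retries exceeded ({error!r})")
    return total

total = 0  # the notebook test harness uses Retry(total=0): no retries
try:
    increment(total, ConnectionRefusedError(111, "Connection refused"))
except MaxRetriesExceeded as e:
    print("exhausted:", e)
```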
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
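
The isolation strategy described in the comment above (throwaway home/data/config directories plus a patched environment) can be sketched with the stdlib alone; the variable names below are illustrative, not the notebook test suite's own:

```python
# Sketch of the environment/directory isolation used by setup_class:
# a temporary directory tree, plus patch.dict on os.environ so the
# patched values disappear again when the context exits.
import os
import tempfile
from unittest.mock import patch

with tempfile.TemporaryDirectory() as tmp:
    home = os.path.join(tmp, "home")
    os.makedirs(home)
    env = {"HOME": home, "JUPYTER_CONFIG_DIR": os.path.join(tmp, "config")}
    with patch.dict(os.environ, env):
        # Inside the patch, code under test sees only the throwaway paths.
        assert os.environ["HOME"] == home

# On exit the original environment is restored and the tree is removed.
assert os.environ.get("HOME") != home
assert not os.path.exists(home)
```

This is why the log's comment notes that Python env locations are left alone: only the user and system paths are redirected, so packaged data files (e.g. for nbconvert) remain reachable.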
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
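
The launch pattern in `setup_class` above (start the server in a daemon thread, set an `Event` in a ``finally:`` so a failed start cannot hang the main thread, then poll) reduces to roughly this sketch, with stand-in names for the app pieces:

```python
# Stripped-down version of the setup_class launch pattern: the server
# thread signals readiness via an Event that is set even on failure,
# and the main thread then polls instead of assuming readiness.
import threading
import time

started = threading.Event()
state = {"alive": False}

def start_server():
    try:
        time.sleep(0.05)   # stand-in for app.initialize()/app.start()
        state["alive"] = True
    finally:
        started.set()      # set even on failure, so started.wait() returns

t = threading.Thread(target=start_server, daemon=True)
t.start()
started.wait()

# Poll, as wait_until_alive does; the Event alone only proves the thread
# ran, not that the server is actually serving.
for _ in range(100):
    if state["alive"]:
        break
    time.sleep(0.01)
assert state["alive"]
```

In the failing run above, `started.set()` fired from the ``finally:`` block even though the server never came up, which is exactly why the polling loop then hits connection-refused errors and eventually raises `RuntimeError("The notebook server failed to start")`.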
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s __________________ ERROR at setup of FilesTest.test_view_html __________________ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
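
The `[Errno 111] Connection refused` at the bottom of this chain is reproducible with the stdlib directly. The sketch below finds a port by binding and immediately releasing a socket, so nothing is listening on it (a small race is possible if another process grabs the port in between):

```python
# Demonstrates the failure mode in this log: connecting to a port with
# no listener raises ConnectionRefusedError (an OSError subclass),
# which urllib3 wraps into NewConnectionError.
import socket

# Ask the OS for a free port, then release it so no one is listening.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
_, port = probe.getsockname()
probe.close()

try:
    socket.create_connection(("127.0.0.1", port), timeout=1)
except ConnectionRefusedError as e:
    print("refused as expected, errno:", e.errno)
except OSError as e:
    # Some platforms may surface a different OSError subclass.
    print("connection failed:", e)
```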
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s __________ ERROR at setup of TestGateway.test_gateway_class_mappings ___________ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s GatewayClient.clear_instance() 281s > super().setup_class() 281s 281s notebook/tests/test_gateway.py:138: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:198: in setup_class 281s cls.wait_until_alive() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s __________ ERROR at setup of TestGateway.test_gateway_get_kernelspecs __________
281s _______ ERROR at setup of TestGateway.test_gateway_get_named_kernelspec ________ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection.
281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 
281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = 
True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. 
Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. 
Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 
281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 
281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 
281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | 
None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 
281s                 if redirect is not None:
281s                     redirect -= 1
281s                 cause = "too many redirects"
281s                 response_redirect_location = response.get_redirect_location()
281s                 if response_redirect_location:
281s                     redirect_location = response_redirect_location
281s                 status = response.status
281s 
281s         else:
281s             # Incrementing because of a server error like a 500 in
281s             # status_forcelist and the given method is in the allowed_methods
281s             cause = ResponseError.GENERIC_ERROR
281s             if response and response.status:
281s                 if status_count is not None:
281s                     status_count -= 1
281s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
281s                 status = response.status
281s 
281s         history = self.history + (
281s             RequestHistory(method, url, error, status, redirect_location),
281s         )
281s 
281s         new_retry = self.new(
281s             total=total,
281s             connect=connect,
281s             read=read,
281s             redirect=redirect,
281s             status=status_count,
281s             other=other,
281s             history=history,
281s         )
281s 
281s         if new_retry.is_exhausted():
281s             reason = error or ResponseError(cause)
281s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
281s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s >               cls.fetch_url(url)
281s 
281s notebook/tests/launchnotebook.py:53: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s notebook/tests/launchnotebook.py:82: in fetch_url
281s     return requests.get(url)
281s /usr/lib/python3/dist-packages/requests/api.py:73: in get
281s     return request("get", url, params=params, **kwargs)
281s /usr/lib/python3/dist-packages/requests/api.py:59: in request
281s     return session.request(method=method, url=url, **kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
281s     resp = self.send(prep, **send_kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
281s     r = adapter.send(request, **kwargs)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object."""
281s         ...
281s         except MaxRetryError as e:
281s             ...
281s >           raise ConnectionError(e, request=request)
281s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def setup_class(cls):
281s         GatewayClient.clear_instance()
281s >       super().setup_class()
281s 
281s notebook/tests/test_gateway.py:138: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s notebook/tests/launchnotebook.py:198: in setup_class
281s     cls.wait_until_alive()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E                   RuntimeError: The notebook server failed to start
281s 
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s _________ ERROR at setup of TestGateway.test_gateway_kernel_lifecycle __________
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 
281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = 
True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. 
Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. 
Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 
281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 
281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 
281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | 
None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 
281s             if redirect is not None:
281s                 redirect -= 1
281s             cause = "too many redirects"
281s             response_redirect_location = response.get_redirect_location()
281s             if response_redirect_location:
281s                 redirect_location = response_redirect_location
281s             status = response.status
281s 
281s         else:
281s             # Incrementing because of a server error like a 500 in
281s             # status_forcelist and the given method is in the allowed_methods
281s             cause = ResponseError.GENERIC_ERROR
281s             if response and response.status:
281s                 if status_count is not None:
281s                     status_count -= 1
281s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
281s                 status = response.status
281s 
281s         history = self.history + (
281s             RequestHistory(method, url, error, status, redirect_location),
281s         )
281s 
281s         new_retry = self.new(
281s             total=total,
281s             connect=connect,
281s             read=read,
281s             redirect=redirect,
281s             status=status_count,
281s             other=other,
281s             history=history,
281s         )
281s 
281s         if new_retry.is_exhausted():
281s             reason = error or ResponseError(cause)
281s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
281s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s >               cls.fetch_url(url)
281s 
281s notebook/tests/launchnotebook.py:53: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s notebook/tests/launchnotebook.py:82: in fetch_url
281s     return requests.get(url)
281s 
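281s [editor's note] The test helper above polls the server's contents API until it responds or the wait budget runs out. A minimal, self-contained sketch of the same readiness-poll pattern follows — it probes the TCP port directly rather than going through requests, and the function name and parameters are illustrative, not the notebook test suite's actual API:

```python
import socket
import time

def wait_until_alive(host, port, max_wait=5.0, interval=0.1):
    """Poll (host, port) until a TCP connection succeeds or max_wait elapses.

    Returns True once the server accepts a connection, False on timeout.
    This mirrors the shape of the wait_until_alive() loop in the log above,
    but uses a raw socket probe instead of requests.get(); names are
    illustrative only.
    """
    deadline = time.monotonic() + max_wait
    while time.monotonic() < deadline:
        try:
            # Successful connect means something is listening on the port.
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            # ECONNREFUSED / timeout: server not up yet, keep polling.
            time.sleep(interval)
    return False
```

In the failing run, the notebook server thread died before ever listening on port 12341, so every probe in the loop hits the equivalent of the `OSError` branch until the `RuntimeError` is raised.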
/usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 
281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s                 raise SSLError(e, request=request)
281s 
281s >           raise ConnectionError(e, request=request)
281s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def setup_class(cls):
281s         GatewayClient.clear_instance()
281s >       super().setup_class()
281s 
281s notebook/tests/test_gateway.py:138: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s notebook/tests/launchnotebook.py:198: in setup_class
281s     cls.wait_until_alive()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E                   RuntimeError: The notebook server failed to start
281s 
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s ______________ ERROR at setup of TestGateway.test_gateway_options ______________
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 
281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = 
True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. 
Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. 
Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 
281s         if not http_tunnel_required:
281s             headers = headers.copy()  # type: ignore[attr-defined]
281s             headers.update(self.proxy_headers)  # type: ignore[union-attr]
281s 
281s         # Must keep the exception bound to a separate variable or else Python 3
281s         # complains about UnboundLocalError.
281s         err = None
281s 
281s         # Keep track of whether we cleanly exited the except block. This
281s         # ensures we do proper cleanup in finally.
281s         clean_exit = False
281s 
281s         # Rewind body position, if needed. Record current position
281s         # for future rewinds in the event of a redirect/retry.
281s         body_pos = set_file_position(body, body_pos)
281s 
281s         try:
281s             # Request a connection from the queue.
281s             timeout_obj = self._get_timeout(timeout)
281s             conn = self._get_conn(timeout=pool_timeout)
281s 
281s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
281s 
281s             # Is this a closed/new connection that requires CONNECT tunnelling?
281s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
281s                 try:
281s                     self._prepare_proxy(conn)
281s                 except (BaseSSLError, OSError, SocketTimeout) as e:
281s                     self._raise_timeout(
281s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
281s                     )
281s                     raise
281s 
281s             # If we're going to release the connection in ``finally:``, then
281s             # the response doesn't need to know about the connection. Otherwise
281s             # it will also try to release it and we'll have a double-release
281s             # mess.
281s             response_conn = conn if not release_conn else None
281s 
281s             # Make the request on the HTTPConnection object
281s >           response = self._make_request(
281s                 conn,
281s                 method,
281s                 url,
281s                 timeout=timeout_obj,
281s                 body=body,
281s                 headers=headers,
281s                 chunked=chunked,
281s                 retries=retries,
281s                 response_conn=response_conn,
281s                 preload_content=preload_content,
281s                 decode_content=decode_content,
281s                 **response_kw,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
281s     conn.request(
281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
281s     self.endheaders()
281s /usr/lib/python3.12/http/client.py:1331: in endheaders
281s     self._send_output(message_body, encode_chunked=encode_chunked)
281s /usr/lib/python3.12/http/client.py:1091: in _send_output
281s     self.send(msg)
281s /usr/lib/python3.12/http/client.py:1035: in send
281s     self.connect()
281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
281s     self.sock = self._new_conn()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s             sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s         except socket.gaierror as e:
281s             raise NameResolutionError(self.host, self, e) from e
281s         except SocketTimeout as e:
281s             raise ConnectTimeoutError(
281s                 self,
281s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
281s             ) from e
281s 
281s         except OSError as e:
281s >           raise NewConnectionError(
281s                 self, f"Failed to establish a new connection: {e}"
281s             ) from e
281s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s >           resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
281s     retries = retries.increment(
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s method = 'GET', url = '/a%40b/api/contents', response = None
281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
281s _pool = 
281s _stacktrace = 
281s 
281s     def increment(
281s         self,
281s         method: str | None = None,
281s         url: str | None = None,
281s         response: BaseHTTPResponse | None = None,
281s         error: Exception | None = None,
281s         _pool: ConnectionPool | None = None,
281s         _stacktrace: TracebackType | None = None,
281s     ) -> Retry:
281s         """Return a new Retry object with incremented retry counters.
281s 
281s         :param response: A response object, or None, if the server did not
281s             return a response.
281s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
281s         :param Exception error: An error encountered during the request, or
281s             None if the response was received successfully.
281s 
281s         :return: A new ``Retry`` object.
281s         """
281s         if self.total is False and error:
281s             # Disabled, indicate to re-raise the error.
281s             raise reraise(type(error), error, _stacktrace)
281s 
281s         total = self.total
281s         if total is not None:
281s             total -= 1
281s 
281s         connect = self.connect
281s         read = self.read
281s         redirect = self.redirect
281s         status_count = self.status
281s         other = self.other
281s         cause = "unknown"
281s         status = None
281s         redirect_location = None
281s 
281s         if error and self._is_connection_error(error):
281s             # Connect retry?
281s             if connect is False:
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif connect is not None:
281s                 connect -= 1
281s 
281s         elif error and self._is_read_error(error):
281s             # Read retry?
281s             if read is False or method is None or not self._is_method_retryable(method):
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif read is not None:
281s                 read -= 1
281s 
281s         elif error:
281s             # Other retry?
281s             if other is not None:
281s                 other -= 1
281s 
281s         elif response and response.get_redirect_location():
281s             # Redirect retry?
281s             if redirect is not None:
281s                 redirect -= 1
281s             cause = "too many redirects"
281s             response_redirect_location = response.get_redirect_location()
281s             if response_redirect_location:
281s                 redirect_location = response_redirect_location
281s             status = response.status
281s 
281s         else:
281s             # Incrementing because of a server error like a 500 in
281s             # status_forcelist and the given method is in the allowed_methods
281s             cause = ResponseError.GENERIC_ERROR
281s             if response and response.status:
281s                 if status_count is not None:
281s                     status_count -= 1
281s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
281s                 status = response.status
281s 
281s         history = self.history + (
281s             RequestHistory(method, url, error, status, redirect_location),
281s         )
281s 
281s         new_retry = self.new(
281s             total=total,
281s             connect=connect,
281s             read=read,
281s             redirect=redirect,
281s             status=status_count,
281s             other=other,
281s             history=history,
281s         )
281s 
281s         if new_retry.is_exhausted():
281s             reason = error or ResponseError(cause)
281s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
281s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s >               cls.fetch_url(url)
281s 
281s notebook/tests/launchnotebook.py:53: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s notebook/tests/launchnotebook.py:82: in fetch_url
281s     return requests.get(url)
281s /usr/lib/python3/dist-packages/requests/api.py:73: in get
281s     return request("get", url, params=params, **kwargs)
281s /usr/lib/python3/dist-packages/requests/api.py:59: in request
281s     return session.request(method=method, url=url, **kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
281s     resp = self.send(prep, **send_kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
281s     r = adapter.send(request, **kwargs)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s             resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s         except (ProtocolError, OSError) as err:
281s             raise ConnectionError(err, request=request)
281s 
281s         except MaxRetryError as e:
281s             if isinstance(e.reason, ConnectTimeoutError):
281s                 # TODO: Remove this in 3.0.0: see #2811
281s                 if not isinstance(e.reason, NewConnectionError):
281s                     raise ConnectTimeout(e, request=request)
281s 
281s             if isinstance(e.reason, ResponseError):
281s                 raise RetryError(e, request=request)
281s 
281s             if isinstance(e.reason, _ProxyError):
281s                 raise ProxyError(e, request=request)
281s 
281s             if isinstance(e.reason, _SSLError):
281s                 # This branch is for urllib3 v1.22 and later.
281s                 raise SSLError(e, request=request)
281s 
281s >           raise ConnectionError(e, request=request)
281s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def setup_class(cls):
281s         GatewayClient.clear_instance()
281s >       super().setup_class()
281s 
281s notebook/tests/test_gateway.py:138: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s notebook/tests/launchnotebook.py:198: in setup_class
281s     cls.wait_until_alive()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E                   RuntimeError: The notebook server failed to start
281s 
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s _________ ERROR at setup of TestGateway.test_gateway_session_lifecycle _________
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s >           sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s     raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s 
281s     def create_connection(
281s         address: tuple[str, int],
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         source_address: tuple[str, int] | None = None,
281s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s     ) -> socket.socket:
281s         """Connect to *address* and return the socket object.
281s 
281s         Convenience function. Connect to *address* (a 2-tuple ``(host,
281s         port)``) and return the socket object. Passing the optional
281s         *timeout* parameter will set the timeout on the socket instance
281s         before attempting to connect. If no *timeout* is supplied, the
281s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
281s         is used. If *source_address* is set it must be a tuple of (host, port)
281s         for the socket to bind as a source address before making the connection.
281s         An host of '' or port 0 tells the OS to use the default.
281s         """
281s 
281s         host, port = address
281s         if host.startswith("["):
281s             host = host.strip("[]")
281s         err = None
281s 
281s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
281s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
281s         # The original create_connection function always returns all records.
281s         family = allowed_gai_family()
281s 
281s         try:
281s             host.encode("idna")
281s         except UnicodeError:
281s             raise LocationParseError(f"'{host}', label empty or too long") from None
281s 
281s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
281s             af, socktype, proto, canonname, sa = res
281s             sock = None
281s             try:
281s                 sock = socket.socket(af, socktype, proto)
281s 
281s                 # If provided, set socket level options before connecting.
281s                 _set_socket_options(sock, socket_options)
281s 
281s                 if timeout is not _DEFAULT_TIMEOUT:
281s                     sock.settimeout(timeout)
281s                 if source_address:
281s                     sock.bind(source_address)
281s >               sock.connect(sa)
281s E               ConnectionRefusedError: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s method = 'GET', url = '/a%40b/api/contents', body = None
281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s redirect = False, assert_same_host = False
281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
281s release_conn = False, chunked = False, body_pos = None, preload_content = False
281s decode_content = False, response_kw = {}
281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
281s destination_scheme = None, conn = None, release_this_conn = True
281s http_tunnel_required = False, err = None, clean_exit = False
281s 
281s     def urlopen(  # type: ignore[override]
281s         self,
281s         method: str,
281s         url: str,
281s         body: _TYPE_BODY | None = None,
281s         headers: typing.Mapping[str, str] | None = None,
281s         retries: Retry | bool | int | None = None,
281s         redirect: bool = True,
281s         assert_same_host: bool = True,
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         pool_timeout: int | None = None,
281s         release_conn: bool | None = None,
281s         chunked: bool = False,
281s         body_pos: _TYPE_BODY_POSITION | None = None,
281s         preload_content: bool = True,
281s         decode_content: bool = True,
281s         **response_kw: typing.Any,
281s     ) -> BaseHTTPResponse:
281s         """
281s         Get a connection from the pool and perform an HTTP request. This is the
281s         lowest level call for making a request, so you'll need to specify all
281s         the raw details.
281s 
281s         .. note::
281s 
281s            More commonly, it's appropriate to use a convenience method
281s            such as :meth:`request`.
281s 
281s         .. note::
281s 
281s            `release_conn` will only behave as expected if
281s            `preload_content=False` because we want to make
281s            `preload_content=False` the default behaviour someday soon without
281s            breaking backwards compatibility.
281s 
281s         :param method:
281s             HTTP request method (such as GET, POST, PUT, etc.)
281s 
281s         :param url:
281s             The URL to perform the request on.
281s 
281s         :param body:
281s             Data to send in the request body, either :class:`str`, :class:`bytes`,
281s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
281s 
281s         :param headers:
281s             Dictionary of custom headers to send, such as User-Agent,
281s             If-None-Match, etc. If None, pool headers are used. If provided,
281s             these headers completely replace any pool-specific headers.
281s 
281s         :param retries:
281s             Configure the number of retries to allow before raising a
281s             :class:`~urllib3.exceptions.MaxRetryError` exception.
281s 
281s             Pass ``None`` to retry until you receive a response. Pass a
281s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
281s             over different types of retries.
281s             Pass an integer number to retry connection errors that many times,
281s             but no other types of errors. Pass zero to never retry.
281s 
281s             If ``False``, then retries are disabled and any exception is raised
281s             immediately. Also, instead of raising a MaxRetryError on redirects,
281s             the redirect response will be returned.
281s 
281s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
281s 
281s         :param redirect:
281s             If True, automatically handle redirects (status codes 301, 302,
281s             303, 307, 308). Each redirect counts as a retry. Disabling retries
281s             will disable redirect, too.
281s 
281s         :param assert_same_host:
281s             If ``True``, will make sure that the host of the pool requests is
281s             consistent else will raise HostChangedError. When ``False``, you can
281s             use the pool on an HTTP proxy and request foreign hosts.
281s 
281s         :param timeout:
281s             If specified, overrides the default timeout for this one
281s             request. It may be a float (in seconds) or an instance of
281s             :class:`urllib3.util.Timeout`.
281s 
281s         :param pool_timeout:
281s             If set and the pool is set to block=True, then this method will
281s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
281s             connection is available within the time period.
281s 
281s         :param bool preload_content:
281s             If True, the response's body will be preloaded into memory.
281s 
281s         :param bool decode_content:
281s             If True, will attempt to decode the body based on the
281s             'content-encoding' header.
281s 
281s         :param release_conn:
281s             If False, then the urlopen call will not release the connection
281s             back into the pool once a response is received (but will release if
281s             you read the entire contents of the response such as when
281s             `preload_content=True`). This is useful if you're not preloading
281s             the response's content immediately. You will need to call
281s             ``r.release_conn()`` on the response ``r`` to return the connection
281s             back into the pool. If None, it takes the value of ``preload_content``
281s             which defaults to ``True``.
281s 
281s         :param bool chunked:
281s             If True, urllib3 will send the body using chunked transfer
281s             encoding. Otherwise, urllib3 will send the body using the standard
281s             content-length form. Defaults to False.
281s 
281s         :param int body_pos:
281s             Position to seek to in file-like body in the event of a retry or
281s             redirect. Typically this won't need to be set because urllib3 will
281s             auto-populate the value when needed.
281s         """
281s         parsed_url = parse_url(url)
281s         destination_scheme = parsed_url.scheme
281s 
281s         if headers is None:
281s             headers = self.headers
281s 
281s         if not isinstance(retries, Retry):
281s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
281s 
281s         if release_conn is None:
281s             release_conn = preload_content
281s 
281s         # Check host
281s         if assert_same_host and not self.is_same_host(url):
281s             raise HostChangedError(self, url, retries)
281s 
281s         # Ensure that the URL we're connecting to is properly encoded
281s         if url.startswith("/"):
281s             url = to_str(_encode_target(url))
281s         else:
281s             url = to_str(parsed_url.url)
281s 
281s         conn = None
281s 
281s         # Track whether `conn` needs to be released before
281s         # returning/raising/recursing. Update this variable if necessary, and
281s         # leave `release_conn` constant throughout the function. That way, if
281s         # the function recurses, the original value of `release_conn` will be
281s         # passed down into the recursive call, and its value will be respected.
281s         #
281s         # See issue #651 [1] for details.
281s         #
281s         # [1] 
281s         release_this_conn = release_conn
281s 
281s         http_tunnel_required = connection_requires_http_tunnel(
281s             self.proxy, self.proxy_config, destination_scheme
281s         )
281s 
281s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
281s         # have to copy the headers dict so we can safely change it without those
281s         # changes being reflected in anyone else's copy.
281s         if not http_tunnel_required:
281s             headers = headers.copy()  # type: ignore[attr-defined]
281s             headers.update(self.proxy_headers)  # type: ignore[union-attr]
281s 
281s         # Must keep the exception bound to a separate variable or else Python 3
281s         # complains about UnboundLocalError.
281s         err = None
281s 
281s         # Keep track of whether we cleanly exited the except block. This
281s         # ensures we do proper cleanup in finally.
281s         clean_exit = False
281s 
281s         # Rewind body position, if needed. Record current position
281s         # for future rewinds in the event of a redirect/retry.
281s         body_pos = set_file_position(body, body_pos)
281s 
281s         try:
281s             # Request a connection from the queue.
281s             timeout_obj = self._get_timeout(timeout)
281s             conn = self._get_conn(timeout=pool_timeout)
281s 
281s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
281s 
281s             # Is this a closed/new connection that requires CONNECT tunnelling?
281s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
281s                 try:
281s                     self._prepare_proxy(conn)
281s                 except (BaseSSLError, OSError, SocketTimeout) as e:
281s                     self._raise_timeout(
281s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
281s                     )
281s                     raise
281s 
281s             # If we're going to release the connection in ``finally:``, then
281s             # the response doesn't need to know about the connection. Otherwise
281s             # it will also try to release it and we'll have a double-release
281s             # mess.
281s             response_conn = conn if not release_conn else None
281s 
281s             # Make the request on the HTTPConnection object
281s >           response = self._make_request(
281s                 conn,
281s                 method,
281s                 url,
281s                 timeout=timeout_obj,
281s                 body=body,
281s                 headers=headers,
281s                 chunked=chunked,
281s                 retries=retries,
281s                 response_conn=response_conn,
281s                 preload_content=preload_content,
281s                 decode_content=decode_content,
281s                 **response_kw,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
281s     conn.request(
281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
281s     self.endheaders()
281s /usr/lib/python3.12/http/client.py:1331: in endheaders
281s     self._send_output(message_body, encode_chunked=encode_chunked)
281s /usr/lib/python3.12/http/client.py:1091: in _send_output
281s     self.send(msg)
281s /usr/lib/python3.12/http/client.py:1035: in send
281s     self.connect()
281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
281s     self.sock = self._new_conn()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s             sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s         except socket.gaierror as e:
281s             raise NameResolutionError(self.host, self, e) from e
281s         except SocketTimeout as e:
281s             raise ConnectTimeoutError(
281s                 self,
281s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
281s             ) from e
281s 
281s         except OSError as e:
281s >           raise NewConnectionError(
281s                 self, f"Failed to establish a new connection: {e}"
281s             ) from e
281s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s >           resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
281s     retries = retries.increment(
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s method = 'GET', url = '/a%40b/api/contents', response = None
281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
281s _pool = 
281s _stacktrace = 
281s 
281s     def increment(
281s         self,
281s         method: str | None = None,
281s         url: str | None = None,
281s         response: BaseHTTPResponse | None = None,
281s         error: Exception | None = None,
281s         _pool: ConnectionPool | None = None,
281s         _stacktrace: TracebackType | None = None,
281s     ) -> Retry:
281s         """Return a new Retry object with incremented retry counters.
281s 
281s         :param response: A response object, or None, if the server did not
281s             return a response.
281s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
281s         :param Exception error: An error encountered during the request, or
281s             None if the response was received successfully.
281s 
281s         :return: A new ``Retry`` object.
281s         """
281s         if self.total is False and error:
281s             # Disabled, indicate to re-raise the error.
281s             raise reraise(type(error), error, _stacktrace)
281s 
281s         total = self.total
281s         if total is not None:
281s             total -= 1
281s 
281s         connect = self.connect
281s         read = self.read
281s         redirect = self.redirect
281s         status_count = self.status
281s         other = self.other
281s         cause = "unknown"
281s         status = None
281s         redirect_location = None
281s 
281s         if error and self._is_connection_error(error):
281s             # Connect retry?
281s             if connect is False:
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif connect is not None:
281s                 connect -= 1
281s 
281s         elif error and self._is_read_error(error):
281s             # Read retry?
281s             if read is False or method is None or not self._is_method_retryable(method):
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif read is not None:
281s                 read -= 1
281s 
281s         elif error:
281s             # Other retry?
281s             if other is not None:
281s                 other -= 1
281s 
281s         elif response and response.get_redirect_location():
281s             # Redirect retry?
281s             if redirect is not None:
281s                 redirect -= 1
281s             cause = "too many redirects"
281s             response_redirect_location = response.get_redirect_location()
281s             if response_redirect_location:
281s                 redirect_location = response_redirect_location
281s             status = response.status
281s 
281s         else:
281s             # Incrementing because of a server error like a 500 in
281s             # status_forcelist and the given method is in the allowed_methods
281s             cause = ResponseError.GENERIC_ERROR
281s             if response and response.status:
281s                 if status_count is not None:
281s                     status_count -= 1
281s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
281s                 status = response.status
281s 
281s         history = self.history + (
281s             RequestHistory(method, url, error, status, redirect_location),
281s         )
281s 
281s         new_retry = self.new(
281s             total=total,
281s             connect=connect,
281s             read=read,
281s             redirect=redirect,
281s             status=status_count,
281s             other=other,
281s             history=history,
281s         )
281s 
281s         if new_retry.is_exhausted():
281s             reason = error or ResponseError(cause)
281s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
281s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s >               cls.fetch_url(url)
281s 
281s notebook/tests/launchnotebook.py:53: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s notebook/tests/launchnotebook.py:82: in fetch_url
281s     return requests.get(url)
281s /usr/lib/python3/dist-packages/requests/api.py:73: in get
281s     return request("get", url, params=params, **kwargs)
281s /usr/lib/python3/dist-packages/requests/api.py:59: in request
281s     return session.request(method=method, url=url, **kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
281s     resp = self.send(prep, **send_kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
281s     r = adapter.send(request, **kwargs)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s             resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s         except (ProtocolError, OSError) as err:
281s             raise ConnectionError(err, request=request)
281s 
281s         except MaxRetryError as e:
281s             if isinstance(e.reason, ConnectTimeoutError):
281s                 # TODO: Remove this in 3.0.0: see #2811
281s                 if not isinstance(e.reason, NewConnectionError):
281s                     raise ConnectTimeout(e, request=request)
281s 
281s             if isinstance(e.reason, ResponseError):
281s                 raise RetryError(e, request=request)
281s 
281s             if isinstance(e.reason, _ProxyError):
281s                 raise ProxyError(e, request=request)
281s 
281s             if isinstance(e.reason, _SSLError):
281s                 # This branch is for urllib3 v1.22 and later.
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s GatewayClient.clear_instance() 281s > super().setup_class() 281s 281s notebook/tests/test_gateway.py:138: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:198: in setup_class 281s cls.wait_until_alive() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s _________ ERROR at setup of NotebookAppTests.test_list_running_servers _________ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 
281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 
281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = 
True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. 
Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. 
Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 
281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 
281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 
281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | 
None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 
281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s 
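The `Retry.increment` body quoted above decrements one counter per failure and raises `MaxRetryError` once the new retry object is exhausted. A toy model of just that counting behaviour (not urllib3's real API; names and the `RuntimeError` stand-in are illustrative):

```python
class ToyRetry:
    """Toy model of the counter logic in urllib3's Retry.increment:
    each failure consumes one unit of `total`, and an exhausted counter
    turns the underlying error into a hard raise (sketch only)."""

    def __init__(self, total, history=()):
        self.total = total
        self.history = history

    def is_exhausted(self):
        return self.total < 0

    def increment(self, url, error):
        # Record the failed attempt, as RequestHistory does in the log.
        new_retry = ToyRetry(self.total - 1, self.history + ((url, error),))
        if new_retry.is_exhausted():
            # Mirrors `raise MaxRetryError(_pool, url, reason) from reason`.
            raise RuntimeError(f"Max retries exceeded with url: {url}") from error
        return new_retry
```

With `total=0`, as shown in the log (`Retry(total=0, connect=None, read=False, ...)`), the very first `Connection refused` exhausts the counter, which is why each test setup fails immediately instead of retrying.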
/usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 
281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s                 raise SSLError(e, request=request)
281s 
281s >       raise ConnectionError(e, request=request)
281s E       requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def setup_class(cls):
281s         cls.tmp_dir = TemporaryDirectory()
281s         def tmp(*parts):
281s             path = os.path.join(cls.tmp_dir.name, *parts)
281s             try:
281s                 os.makedirs(path)
281s             except OSError as e:
281s                 if e.errno != errno.EEXIST:
281s                     raise
281s             return path
281s 
281s         cls.home_dir = tmp('home')
281s         data_dir = cls.data_dir = tmp('data')
281s         config_dir = cls.config_dir = tmp('config')
281s         runtime_dir = cls.runtime_dir = tmp('runtime')
281s         cls.notebook_dir = tmp('notebooks')
281s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
281s         cls.env_patch.start()
281s         # Patch systemwide & user-wide data & config directories, to isolate
281s         # the tests from oddities of the local setup. But leave Python env
281s         # locations alone, so data files for e.g. nbconvert are accessible.
281s         # If this isolation isn't sufficient, you may need to run the tests in
281s         # a virtualenv or conda env.
281s         cls.path_patch = patch.multiple(
281s             jupyter_core.paths,
281s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
281s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
281s         )
281s         cls.path_patch.start()
281s 
281s         config = cls.config or Config()
281s         config.NotebookNotary.db_file = ':memory:'
281s 
281s         cls.token = hexlify(os.urandom(4)).decode('ascii')
281s 
281s         started = Event()
281s         def start_thread():
281s             try:
281s                 bind_args = cls.get_bind_args()
281s                 app = cls.notebook = NotebookApp(
281s                     port_retries=0,
281s                     open_browser=False,
281s                     config_dir=cls.config_dir,
281s                     data_dir=cls.data_dir,
281s                     runtime_dir=cls.runtime_dir,
281s                     notebook_dir=cls.notebook_dir,
281s                     base_url=cls.url_prefix,
281s                     config=config,
281s                     allow_root=True,
281s                     token=cls.token,
281s                     **bind_args
281s                 )
281s                 if "asyncio" in sys.modules:
281s                     app._init_asyncio_patch()
281s                 import asyncio
281s 
281s                 asyncio.set_event_loop(asyncio.new_event_loop())
281s                 # Patch the current loop in order to match production
281s                 # behavior
281s                 import nest_asyncio
281s 
281s                 nest_asyncio.apply()
281s                 # don't register signal handler during tests
281s                 app.init_signal = lambda : None
281s                 # clear log handlers and propagate to root for nose to capture it
281s                 # needs to be redone after initialize, which reconfigures logging
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 app.initialize(argv=cls.get_argv())
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 loop = IOLoop.current()
281s                 loop.add_callback(started.set)
281s                 app.start()
281s             finally:
281s                 # set the event, so failure to start doesn't cause a hang
281s                 started.set()
281s                 app.session_manager.close()
281s         cls.notebook_thread = Thread(target=start_thread)
281s         cls.notebook_thread.daemon = True
281s         cls.notebook_thread.start()
281s         started.wait()
281s >       cls.wait_until_alive()
281s 
281s notebook/tests/launchnotebook.py:198: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E                   RuntimeError: The notebook server failed to start
281s 
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s ___________ ERROR at setup of NotebookAppTests.test_log_json_default ___________
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s >           sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s     raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s 
281s     def create_connection(
281s         address: tuple[str, int],
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         source_address: tuple[str, int] | None = None,
281s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s     ) -> socket.socket:
281s         """Connect to *address* and return the socket object.
281s 
281s         Convenience function. Connect to *address* (a 2-tuple ``(host,
281s         port)``) and return the socket object. Passing the optional
281s         *timeout* parameter will set the timeout on the socket instance
281s         before attempting to connect. If no *timeout* is supplied, the
281s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
281s         is used. If *source_address* is set it must be a tuple of (host, port)
281s         for the socket to bind as a source address before making the connection.
281s         An host of '' or port 0 tells the OS to use the default.
281s         """
281s 
281s         host, port = address
281s         if host.startswith("["):
281s             host = host.strip("[]")
281s         err = None
281s 
281s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
281s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
281s         # The original create_connection function always returns all records.
281s         family = allowed_gai_family()
281s 
281s         try:
281s             host.encode("idna")
281s         except UnicodeError:
281s             raise LocationParseError(f"'{host}', label empty or too long") from None
281s 
281s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
281s             af, socktype, proto, canonname, sa = res
281s             sock = None
281s             try:
281s                 sock = socket.socket(af, socktype, proto)
281s 
281s                 # If provided, set socket level options before connecting.
281s                 _set_socket_options(sock, socket_options)
281s 
281s                 if timeout is not _DEFAULT_TIMEOUT:
281s                     sock.settimeout(timeout)
281s                 if source_address:
281s                     sock.bind(source_address)
281s >               sock.connect(sa)
281s E               ConnectionRefusedError: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s method = 'GET', url = '/a%40b/api/contents', body = None
281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s redirect = False, assert_same_host = False
281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
281s release_conn = False, chunked = False, body_pos = None, preload_content = False
281s decode_content = False, response_kw = {}
281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
281s destination_scheme = None, conn = None, release_this_conn = True
281s http_tunnel_required = False, err = None, clean_exit = False
281s 
281s     def urlopen(  # type: ignore[override]
281s         self,
281s         method: str,
281s         url: str,
281s         body: _TYPE_BODY | None = None,
281s         headers: typing.Mapping[str, str] | None = None,
281s         retries: Retry | bool | int | None = None,
281s         redirect: bool = True,
281s         assert_same_host: bool = True,
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         pool_timeout: int | None = None,
281s         release_conn: bool | None = None,
281s         chunked: bool = False,
281s         body_pos: _TYPE_BODY_POSITION | None = None,
281s         preload_content: bool = True,
281s         decode_content: bool = True,
281s         **response_kw: typing.Any,
281s     ) -> BaseHTTPResponse:
281s         """
281s         Get a connection from the pool and perform an HTTP request. This is the
281s         lowest level call for making a request, so you'll need to specify all
281s         the raw details.
281s 
281s         .. note::
281s 
281s            More commonly, it's appropriate to use a convenience method
281s            such as :meth:`request`.
281s 
281s         .. note::
281s 
281s            `release_conn` will only behave as expected if
281s            `preload_content=False` because we want to make
281s            `preload_content=False` the default behaviour someday soon without
281s            breaking backwards compatibility.
281s 
281s         :param method:
281s             HTTP request method (such as GET, POST, PUT, etc.)
281s 
281s         :param url:
281s             The URL to perform the request on.
281s 
281s         :param body:
281s             Data to send in the request body, either :class:`str`, :class:`bytes`,
281s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
281s 
281s         :param headers:
281s             Dictionary of custom headers to send, such as User-Agent,
281s             If-None-Match, etc. If None, pool headers are used. If provided,
281s             these headers completely replace any pool-specific headers.
281s 
281s         :param retries:
281s             Configure the number of retries to allow before raising a
281s             :class:`~urllib3.exceptions.MaxRetryError` exception.
281s 
281s             Pass ``None`` to retry until you receive a response. Pass a
281s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
281s             over different types of retries.
281s             Pass an integer number to retry connection errors that many times,
281s             but no other types of errors. Pass zero to never retry.
281s 
281s             If ``False``, then retries are disabled and any exception is raised
281s             immediately. Also, instead of raising a MaxRetryError on redirects,
281s             the redirect response will be returned.
281s 
281s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
281s 
281s         :param redirect:
281s             If True, automatically handle redirects (status codes 301, 302,
281s             303, 307, 308). Each redirect counts as a retry. Disabling retries
281s             will disable redirect, too.
281s 
281s         :param assert_same_host:
281s             If ``True``, will make sure that the host of the pool requests is
281s             consistent else will raise HostChangedError. When ``False``, you can
281s             use the pool on an HTTP proxy and request foreign hosts.
281s 
281s         :param timeout:
281s             If specified, overrides the default timeout for this one
281s             request. It may be a float (in seconds) or an instance of
281s             :class:`urllib3.util.Timeout`.
281s 
281s         :param pool_timeout:
281s             If set and the pool is set to block=True, then this method will
281s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
281s             connection is available within the time period.
281s 
281s         :param bool preload_content:
281s             If True, the response's body will be preloaded into memory.
281s 
281s         :param bool decode_content:
281s             If True, will attempt to decode the body based on the
281s             'content-encoding' header.
281s 
281s         :param release_conn:
281s             If False, then the urlopen call will not release the connection
281s             back into the pool once a response is received (but will release if
281s             you read the entire contents of the response such as when
281s             `preload_content=True`). This is useful if you're not preloading
281s             the response's content immediately. You will need to call
281s             ``r.release_conn()`` on the response ``r`` to return the connection
281s             back into the pool. If None, it takes the value of ``preload_content``
281s             which defaults to ``True``.
281s 
281s         :param bool chunked:
281s             If True, urllib3 will send the body using chunked transfer
281s             encoding. Otherwise, urllib3 will send the body using the standard
281s             content-length form. Defaults to False.
281s 
281s         :param int body_pos:
281s             Position to seek to in file-like body in the event of a retry or
281s             redirect. Typically this won't need to be set because urllib3 will
281s             auto-populate the value when needed.
281s         """
281s         parsed_url = parse_url(url)
281s         destination_scheme = parsed_url.scheme
281s 
281s         if headers is None:
281s             headers = self.headers
281s 
281s         if not isinstance(retries, Retry):
281s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
281s 
281s         if release_conn is None:
281s             release_conn = preload_content
281s 
281s         # Check host
281s         if assert_same_host and not self.is_same_host(url):
281s             raise HostChangedError(self, url, retries)
281s 
281s         # Ensure that the URL we're connecting to is properly encoded
281s         if url.startswith("/"):
281s             url = to_str(_encode_target(url))
281s         else:
281s             url = to_str(parsed_url.url)
281s 
281s         conn = None
281s 
281s         # Track whether `conn` needs to be released before
281s         # returning/raising/recursing. Update this variable if necessary, and
281s         # leave `release_conn` constant throughout the function. That way, if
281s         # the function recurses, the original value of `release_conn` will be
281s         # passed down into the recursive call, and its value will be respected.
281s         #
281s         # See issue #651 [1] for details.
281s         #
281s         # [1] 
281s         release_this_conn = release_conn
281s 
281s         http_tunnel_required = connection_requires_http_tunnel(
281s             self.proxy, self.proxy_config, destination_scheme
281s         )
281s 
281s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
281s         # have to copy the headers dict so we can safely change it without those
281s         # changes being reflected in anyone else's copy.
281s         if not http_tunnel_required:
281s             headers = headers.copy()  # type: ignore[attr-defined]
281s             headers.update(self.proxy_headers)  # type: ignore[union-attr]
281s 
281s         # Must keep the exception bound to a separate variable or else Python 3
281s         # complains about UnboundLocalError.
281s         err = None
281s 
281s         # Keep track of whether we cleanly exited the except block. This
281s         # ensures we do proper cleanup in finally.
281s         clean_exit = False
281s 
281s         # Rewind body position, if needed. Record current position
281s         # for future rewinds in the event of a redirect/retry.
281s         body_pos = set_file_position(body, body_pos)
281s 
281s         try:
281s             # Request a connection from the queue.
281s             timeout_obj = self._get_timeout(timeout)
281s             conn = self._get_conn(timeout=pool_timeout)
281s 
281s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
281s 
281s             # Is this a closed/new connection that requires CONNECT tunnelling?
281s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
281s                 try:
281s                     self._prepare_proxy(conn)
281s                 except (BaseSSLError, OSError, SocketTimeout) as e:
281s                     self._raise_timeout(
281s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
281s                     )
281s                     raise
281s 
281s             # If we're going to release the connection in ``finally:``, then
281s             # the response doesn't need to know about the connection. Otherwise
281s             # it will also try to release it and we'll have a double-release
281s             # mess.
281s             response_conn = conn if not release_conn else None
281s 
281s             # Make the request on the HTTPConnection object
281s >           response = self._make_request(
281s                 conn,
281s                 method,
281s                 url,
281s                 timeout=timeout_obj,
281s                 body=body,
281s                 headers=headers,
281s                 chunked=chunked,
281s                 retries=retries,
281s                 response_conn=response_conn,
281s                 preload_content=preload_content,
281s                 decode_content=decode_content,
281s                 **response_kw,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
281s     conn.request(
281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
281s     self.endheaders()
281s /usr/lib/python3.12/http/client.py:1331: in endheaders
281s     self._send_output(message_body, encode_chunked=encode_chunked)
281s /usr/lib/python3.12/http/client.py:1091: in _send_output
281s     self.send(msg)
281s /usr/lib/python3.12/http/client.py:1035: in send
281s     self.connect()
281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
281s     self.sock = self._new_conn()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s             sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s         except socket.gaierror as e:
281s             raise NameResolutionError(self.host, self, e) from e
281s         except SocketTimeout as e:
281s             raise ConnectTimeoutError(
281s                 self,
281s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
281s             ) from e
281s 
281s         except OSError as e:
281s >           raise NewConnectionError(
281s                 self, f"Failed to establish a new connection: {e}"
281s             ) from e
281s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s >           resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
281s     retries = retries.increment(
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s method = 'GET', url = '/a%40b/api/contents', response = None
281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
281s _pool = 
281s _stacktrace = 
281s 
281s     def increment(
281s         self,
281s         method: str | None = None,
281s         url: str | None = None,
281s         response: BaseHTTPResponse | None = None,
281s         error: Exception | None = None,
281s         _pool: ConnectionPool | None = None,
281s         _stacktrace: TracebackType | None = None,
281s     ) -> Retry:
281s         """Return a new Retry object with incremented retry counters.
281s 
281s         :param response: A response object, or None, if the server did not
281s             return a response.
281s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
281s         :param Exception error: An error encountered during the request, or
281s             None if the response was received successfully.
281s 
281s         :return: A new ``Retry`` object.
281s         """
281s         if self.total is False and error:
281s             # Disabled, indicate to re-raise the error.
281s             raise reraise(type(error), error, _stacktrace)
281s 
281s         total = self.total
281s         if total is not None:
281s             total -= 1
281s 
281s         connect = self.connect
281s         read = self.read
281s         redirect = self.redirect
281s         status_count = self.status
281s         other = self.other
281s         cause = "unknown"
281s         status = None
281s         redirect_location = None
281s 
281s         if error and self._is_connection_error(error):
281s             # Connect retry?
281s             if connect is False:
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif connect is not None:
281s                 connect -= 1
281s 
281s         elif error and self._is_read_error(error):
281s             # Read retry?
281s             if read is False or method is None or not self._is_method_retryable(method):
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif read is not None:
281s                 read -= 1
281s 
281s         elif error:
281s             # Other retry?
281s             if other is not None:
281s                 other -= 1
281s 
281s         elif response and response.get_redirect_location():
281s             # Redirect retry?
281s             if redirect is not None:
281s                 redirect -= 1
281s             cause = "too many redirects"
281s             response_redirect_location = response.get_redirect_location()
281s             if response_redirect_location:
281s                 redirect_location = response_redirect_location
281s             status = response.status
281s 
281s         else:
281s             # Incrementing because of a server error like a 500 in
281s             # status_forcelist and the given method is in the allowed_methods
281s             cause = ResponseError.GENERIC_ERROR
281s             if response and response.status:
281s                 if status_count is not None:
281s                     status_count -= 1
281s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
281s                 status = response.status
281s 
281s         history = self.history + (
281s             RequestHistory(method, url, error, status, redirect_location),
281s         )
281s 
281s         new_retry = self.new(
281s             total=total,
281s             connect=connect,
281s             read=read,
281s             redirect=redirect,
281s             status=status_count,
281s             other=other,
281s             history=history,
281s         )
281s 
281s         if new_retry.is_exhausted():
281s             reason = error or ResponseError(cause)
281s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
281s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s >               cls.fetch_url(url)
281s 
281s notebook/tests/launchnotebook.py:53: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s notebook/tests/launchnotebook.py:82: in fetch_url
281s     return requests.get(url)
281s /usr/lib/python3/dist-packages/requests/api.py:73: in get
281s     return request("get", url, params=params, **kwargs)
281s /usr/lib/python3/dist-packages/requests/api.py:59: in request
281s     return session.request(method=method, url=url, **kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
281s     resp = self.send(prep, **send_kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
281s     r = adapter.send(request, **kwargs)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s             resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s         except (ProtocolError, OSError) as err:
281s             raise ConnectionError(err, request=request)
281s 
281s         except MaxRetryError as e:
281s             if isinstance(e.reason, ConnectTimeoutError):
281s                 # TODO: Remove this in 3.0.0: see #2811
281s                 if not isinstance(e.reason, NewConnectionError):
281s                     raise ConnectTimeout(e, request=request)
281s 
281s             if isinstance(e.reason, ResponseError):
281s                 raise RetryError(e, request=request)
281s 
281s             if isinstance(e.reason, _ProxyError):
281s                 raise ProxyError(e, request=request)
281s 
281s             if isinstance(e.reason, _SSLError):
281s                 # This branch is for urllib3 v1.22 and later.
281s                 raise SSLError(e, request=request)
281s 
281s >       raise ConnectionError(e, request=request)
281s E       requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def setup_class(cls):
281s         cls.tmp_dir = TemporaryDirectory()
281s         def tmp(*parts):
281s             path = os.path.join(cls.tmp_dir.name, *parts)
281s             try:
281s                 os.makedirs(path)
281s             except OSError as e:
281s                 if e.errno != errno.EEXIST:
281s                     raise
281s             return path
281s 
281s         cls.home_dir = tmp('home')
281s         data_dir = cls.data_dir = tmp('data')
281s         config_dir = cls.config_dir = tmp('config')
281s         runtime_dir = cls.runtime_dir = tmp('runtime')
281s         cls.notebook_dir = tmp('notebooks')
281s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
281s         cls.env_patch.start()
281s         # Patch systemwide & user-wide data & config directories, to isolate
281s         # the tests from oddities of the local setup. But leave Python env
281s         # locations alone, so data files for e.g. nbconvert are accessible.
281s         # If this isolation isn't sufficient, you may need to run the tests in
281s         # a virtualenv or conda env.
281s         cls.path_patch = patch.multiple(
281s             jupyter_core.paths,
281s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
281s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
281s         )
281s         cls.path_patch.start()
281s 
281s         config = cls.config or Config()
281s         config.NotebookNotary.db_file = ':memory:'
281s 
281s         cls.token = hexlify(os.urandom(4)).decode('ascii')
281s 
281s         started = Event()
281s         def start_thread():
281s             try:
281s                 bind_args = cls.get_bind_args()
281s                 app = cls.notebook = NotebookApp(
281s                     port_retries=0,
281s                     open_browser=False,
281s                     config_dir=cls.config_dir,
281s                     data_dir=cls.data_dir,
281s                     runtime_dir=cls.runtime_dir,
281s                     notebook_dir=cls.notebook_dir,
281s                     base_url=cls.url_prefix,
281s                     config=config,
281s                     allow_root=True,
281s                     token=cls.token,
281s                     **bind_args
281s                 )
281s                 if "asyncio" in sys.modules:
281s                     app._init_asyncio_patch()
281s                 import asyncio
281s 
281s                 asyncio.set_event_loop(asyncio.new_event_loop())
281s                 # Patch the current loop in order to match production
281s                 # behavior
281s                 import nest_asyncio
281s 
281s                 nest_asyncio.apply()
281s                 # don't register signal handler during tests
281s                 app.init_signal = lambda : None
281s                 # clear log handlers and propagate to root for nose to capture it
281s                 # needs to be redone after initialize, which reconfigures logging
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 app.initialize(argv=cls.get_argv())
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 loop = IOLoop.current()
281s                 loop.add_callback(started.set)
281s                 app.start()
281s             finally:
281s                 # set the event, so failure to start doesn't cause a hang
281s                 started.set()
281s                 app.session_manager.close()
281s         cls.notebook_thread = Thread(target=start_thread)
281s         cls.notebook_thread.daemon = True
281s         cls.notebook_thread.start()
281s         started.wait()
281s >       cls.wait_until_alive()
281s 
281s notebook/tests/launchnotebook.py:198: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E                   RuntimeError: The notebook server failed to start
281s 
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s __________ ERROR at setup of NotebookAppTests.test_validate_log_json ___________
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s >           sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s     raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s 
281s     def create_connection(
281s         address: tuple[str, int],
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         source_address: tuple[str, int] | None = None,
281s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s     ) -> socket.socket:
281s         """Connect to *address* and return the socket object.
281s 
281s         Convenience function. Connect to *address* (a 2-tuple ``(host,
281s         port)``) and return the socket object. Passing the optional
281s         *timeout* parameter will set the timeout on the socket instance
281s         before attempting to connect. If no *timeout* is supplied, the
281s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
281s         is used. If *source_address* is set it must be a tuple of (host, port)
281s         for the socket to bind as a source address before making the connection.
281s         An host of '' or port 0 tells the OS to use the default.
281s         """
281s 
281s         host, port = address
281s         if host.startswith("["):
281s             host = host.strip("[]")
281s         err = None
281s 
281s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
281s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
281s         # The original create_connection function always returns all records.
281s         family = allowed_gai_family()
281s 
281s         try:
281s             host.encode("idna")
281s         except UnicodeError:
281s             raise LocationParseError(f"'{host}', label empty or too long") from None
281s 
281s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
281s             af, socktype, proto, canonname, sa = res
281s             sock = None
281s             try:
281s                 sock = socket.socket(af, socktype, proto)
281s 
281s                 # If provided, set socket level options before connecting.
281s                 _set_socket_options(sock, socket_options)
281s 
281s                 if timeout is not _DEFAULT_TIMEOUT:
281s                     sock.settimeout(timeout)
281s                 if source_address:
281s                     sock.bind(source_address)
281s >               sock.connect(sa)
281s E               ConnectionRefusedError: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s method = 'GET', url = '/a%40b/api/contents', body = None
281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s redirect = False, assert_same_host = False
281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
281s release_conn = False, chunked = False, body_pos = None, preload_content = False
281s decode_content = False, response_kw = {}
281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
281s destination_scheme = None, conn = None, release_this_conn = True
281s http_tunnel_required = False, err = None, clean_exit = False
281s 
281s     def urlopen(  # type: ignore[override]
281s         self,
281s         method: str,
281s         url: str,
281s         body: _TYPE_BODY | None = None,
281s         headers: typing.Mapping[str, str] | None = None,
281s         retries: Retry | bool | int | None = None,
281s         redirect: bool = True,
281s         assert_same_host: bool = True,
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         pool_timeout: int | None = None,
281s         release_conn: bool | None = None,
281s         chunked: bool = False,
281s         body_pos: _TYPE_BODY_POSITION | None = None,
281s         preload_content: bool = True,
281s         decode_content: bool = True,
281s         **response_kw: typing.Any,
281s     ) -> BaseHTTPResponse:
281s         """
281s         Get a connection from the pool and perform an HTTP request. This is the
281s         lowest level call for making a request, so you'll need to specify all
281s         the raw details.
281s 
281s         .. note::
281s 
281s            More commonly, it's appropriate to use a convenience method
281s            such as :meth:`request`.
281s 
281s         .. note::
281s 
281s            `release_conn` will only behave as expected if
281s            `preload_content=False` because we want to make
281s            `preload_content=False` the default behaviour someday soon without
281s            breaking backwards compatibility.
281s 
281s         :param method:
281s             HTTP request method (such as GET, POST, PUT, etc.)
281s 
281s         :param url:
281s             The URL to perform the request on.
281s 
281s         :param body:
281s             Data to send in the request body, either :class:`str`, :class:`bytes`,
281s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
281s 
281s         :param headers:
281s             Dictionary of custom headers to send, such as User-Agent,
281s             If-None-Match, etc. If None, pool headers are used. If provided,
281s             these headers completely replace any pool-specific headers.
281s 
281s         :param retries:
281s             Configure the number of retries to allow before raising a
281s             :class:`~urllib3.exceptions.MaxRetryError` exception.
281s 
281s             Pass ``None`` to retry until you receive a response. Pass a
281s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
281s             over different types of retries.
281s             Pass an integer number to retry connection errors that many times,
281s             but no other types of errors. Pass zero to never retry.
281s 
281s             If ``False``, then retries are disabled and any exception is raised
281s             immediately. Also, instead of raising a MaxRetryError on redirects,
281s             the redirect response will be returned.
281s 
281s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
281s 
281s         :param redirect:
281s             If True, automatically handle redirects (status codes 301, 302,
281s             303, 307, 308). Each redirect counts as a retry. Disabling retries
281s             will disable redirect, too.
281s 
281s         :param assert_same_host:
281s             If ``True``, will make sure that the host of the pool requests is
281s             consistent else will raise HostChangedError. When ``False``, you can
281s             use the pool on an HTTP proxy and request foreign hosts.
281s 
281s         :param timeout:
281s             If specified, overrides the default timeout for this one
281s             request. It may be a float (in seconds) or an instance of
281s             :class:`urllib3.util.Timeout`.
281s 
281s         :param pool_timeout:
281s             If set and the pool is set to block=True, then this method will
281s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
281s             connection is available within the time period.
281s 
281s         :param bool preload_content:
281s             If True, the response's body will be preloaded into memory.
281s 
281s         :param bool decode_content:
281s             If True, will attempt to decode the body based on the
281s             'content-encoding' header.
281s 
281s         :param release_conn:
281s             If False, then the urlopen call will not release the connection
281s             back into the pool once a response is received (but will release if
281s             you read the entire contents of the response such as when
281s             `preload_content=True`). This is useful if you're not preloading
281s             the response's content immediately. You will need to call
281s             ``r.release_conn()`` on the response ``r`` to return the connection
281s             back into the pool. If None, it takes the value of ``preload_content``
281s             which defaults to ``True``.
281s 
281s         :param bool chunked:
281s             If True, urllib3 will send the body using chunked transfer
281s             encoding. Otherwise, urllib3 will send the body using the standard
281s             content-length form. Defaults to False.
281s 
281s         :param int body_pos:
281s             Position to seek to in file-like body in the event of a retry or
281s             redirect. Typically this won't need to be set because urllib3 will
281s             auto-populate the value when needed.
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or 
ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s /usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 
281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s ___ ERROR at setup of NotebookUnixSocketTests.test_list_running_sock_servers ___ 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = 
None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 
281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. 
Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. 
This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 
281s             response_conn = conn if not release_conn else None
281s
281s             # Make the request on the HTTPConnection object
281s >           response = self._make_request(
281s                 conn,
281s                 method,
281s                 url,
281s                 timeout=timeout_obj,
281s                 body=body,
281s                 headers=headers,
281s                 chunked=chunked,
281s                 retries=retries,
281s                 response_conn=response_conn,
281s                 preload_content=preload_content,
281s                 decode_content=decode_content,
281s                 **response_kw,
281s             )
281s
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
281s     conn.request(
281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
281s     self.endheaders()
281s /usr/lib/python3.12/http/client.py:1331: in endheaders
281s     self._send_output(message_body, encode_chunked=encode_chunked)
281s /usr/lib/python3.12/http/client.py:1091: in _send_output
281s     self.send(msg)
281s /usr/lib/python3.12/http/client.py:1035: in send
281s     self.connect()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s self = 
281s
281s     def connect(self):
281s         sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
281s         sock.settimeout(self.timeout)
281s         socket_path = unquote(urlparse(self.unix_socket_url).netloc)
281s >       sock.connect(socket_path)
281s E       FileNotFoundError: [Errno 2] No such file or directory
281s
281s /usr/lib/python3/dist-packages/requests_unixsocket/adapters.py:36: FileNotFoundError
281s
281s During handling of the above exception, another exception occurred:
281s
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None
281s proxies = OrderedDict({'no': '127.0.0.1,127.0.1.1,login.ubuntu.com,localhost,localdomain,novalocal,internal,archive.ubuntu.com,p...,objectstorage.prodstack5.canonical.com', 'https': 'http://squid.internal:3128', 'http': 'http://squid.internal:3128'})
281s
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s
281s         try:
281s >           resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s
281s /usr/lib/python3/dist-packages/requests/adapters.py:486:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
281s     retries = retries.increment(
281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:470: in increment
281s     raise reraise(type(error), error, _stacktrace)
281s /usr/lib/python3/dist-packages/urllib3/util/util.py:38: in reraise
281s     raise value.with_traceback(tb)
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: in urlopen
281s     response = self._make_request(
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
281s     conn.request(
281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
281s     self.endheaders()
281s /usr/lib/python3.12/http/client.py:1331: in endheaders
281s     self._send_output(message_body, encode_chunked=encode_chunked)
281s /usr/lib/python3.12/http/client.py:1091: in _send_output
281s     self.send(msg)
281s /usr/lib/python3.12/http/client.py:1035: in send
281s     self.connect()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s self = 
281s
281s     def connect(self):
281s         sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
281s         sock.settimeout(self.timeout)
281s         socket_path = unquote(urlparse(self.unix_socket_url).netloc)
281s >       sock.connect(socket_path)
281s E       urllib3.exceptions.ProtocolError: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))
281s
281s /usr/lib/python3/dist-packages/requests_unixsocket/adapters.py:36: ProtocolError
281s
281s During handling of the above exception, another exception occurred:
281s
281s cls = 
281s
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s >               cls.fetch_url(url)
281s
281s notebook/tests/launchnotebook.py:53:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s notebook/tests/launchnotebook.py:242: in fetch_url
281s     return requests.get(url)
281s /usr/lib/python3/dist-packages/requests_unixsocket/__init__.py:51: in get
281s     return request('get', url, **kwargs)
281s /usr/lib/python3/dist-packages/requests_unixsocket/__init__.py:46: in request
281s     return session.request(method=method, url=url, **kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
281s     resp = self.send(prep, **send_kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
281s     r = adapter.send(request, **kwargs)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None
281s proxies = OrderedDict({'no': '127.0.0.1,127.0.1.1,login.ubuntu.com,localhost,localdomain,novalocal,internal,archive.ubuntu.com,p...,objectstorage.prodstack5.canonical.com', 'https': 'http://squid.internal:3128', 'http': 'http://squid.internal:3128'})
281s
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s
281s         try:
281s             resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s
281s         except (ProtocolError, OSError) as err:
281s >           raise ConnectionError(err, request=request)
281s E           requests.exceptions.ConnectionError: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))
281s
281s /usr/lib/python3/dist-packages/requests/adapters.py:501: ConnectionError
281s
281s The above exception was the direct cause of the following exception:
281s
281s cls = 
281s
281s     @classmethod
281s     def setup_class(cls):
281s         cls.tmp_dir = TemporaryDirectory()
281s         def tmp(*parts):
281s             path = os.path.join(cls.tmp_dir.name, *parts)
281s             try:
281s                 os.makedirs(path)
281s             except OSError as e:
281s                 if e.errno != errno.EEXIST:
281s                     raise
281s             return path
281s
281s         cls.home_dir = tmp('home')
281s         data_dir = cls.data_dir = tmp('data')
281s         config_dir = cls.config_dir = tmp('config')
281s         runtime_dir = cls.runtime_dir = tmp('runtime')
281s         cls.notebook_dir = tmp('notebooks')
281s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
281s         cls.env_patch.start()
281s         # Patch systemwide & user-wide data & config directories, to isolate
281s         # the tests from oddities of the local setup. But leave Python env
281s         # locations alone, so data files for e.g. nbconvert are accessible.
281s         # If this isolation isn't sufficient, you may need to run the tests in
281s         # a virtualenv or conda env.
281s         cls.path_patch = patch.multiple(
281s             jupyter_core.paths,
281s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
281s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
281s         )
281s         cls.path_patch.start()
281s
281s         config = cls.config or Config()
281s         config.NotebookNotary.db_file = ':memory:'
281s
281s         cls.token = hexlify(os.urandom(4)).decode('ascii')
281s
281s         started = Event()
281s         def start_thread():
281s             try:
281s                 bind_args = cls.get_bind_args()
281s                 app = cls.notebook = NotebookApp(
281s                     port_retries=0,
281s                     open_browser=False,
281s                     config_dir=cls.config_dir,
281s                     data_dir=cls.data_dir,
281s                     runtime_dir=cls.runtime_dir,
281s                     notebook_dir=cls.notebook_dir,
281s                     base_url=cls.url_prefix,
281s                     config=config,
281s                     allow_root=True,
281s                     token=cls.token,
281s                     **bind_args
281s                 )
281s                 if "asyncio" in sys.modules:
281s                     app._init_asyncio_patch()
281s                     import asyncio
281s
281s                     asyncio.set_event_loop(asyncio.new_event_loop())
281s                     # Patch the current loop in order to match production
281s                     # behavior
281s                     import nest_asyncio
281s
281s                     nest_asyncio.apply()
281s                 # don't register signal handler during tests
281s                 app.init_signal = lambda : None
281s                 # clear log handlers and propagate to root for nose to capture it
281s                 # needs to be redone after initialize, which reconfigures logging
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 app.initialize(argv=cls.get_argv())
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 loop = IOLoop.current()
281s                 loop.add_callback(started.set)
281s                 app.start()
281s             finally:
281s                 # set the event, so failure to start doesn't cause a hang
281s                 started.set()
281s                 app.session_manager.close()
281s         cls.notebook_thread = Thread(target=start_thread)
281s         cls.notebook_thread.daemon = True
281s         cls.notebook_thread.start()
281s         started.wait()
281s >       cls.wait_until_alive()
281s
281s notebook/tests/launchnotebook.py:198:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s cls = 
281s
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E                   RuntimeError: The notebook server failed to start
281s
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s ______________ ERROR at setup of NotebookUnixSocketTests.test_run ______________
281s
281s self = 
281s method = 'GET', url = '/a%40b/api/contents', body = None
281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s redirect = False, assert_same_host = False
281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
281s release_conn = False, chunked = False, body_pos = None, preload_content = False
281s decode_content = False, response_kw = {}
281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
281s destination_scheme = None, conn = None, release_this_conn = True
281s http_tunnel_required = False, err = None, clean_exit = False
281s
281s     def urlopen( # type: ignore[override]
281s         self,
281s         method: str,
281s         url: str,
281s         body: _TYPE_BODY | None = None,
281s         headers: typing.Mapping[str, str] | None = None,
281s         retries: Retry | bool | int | None = None,
281s         redirect: bool = True,
281s         assert_same_host: bool = True,
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         pool_timeout: int | None = None,
281s         release_conn: bool | None = None,
281s         chunked: bool = False,
281s         body_pos: _TYPE_BODY_POSITION | None = None,
281s         preload_content: bool = True,
281s         decode_content: bool = True,
281s         **response_kw: typing.Any,
281s     ) -> BaseHTTPResponse:
281s         """
281s         Get a connection from the pool and perform an HTTP request. This is the
281s         lowest level call for making a request, so you'll need to specify all
281s         the raw details.
281s
281s         .. note::
281s
281s            More commonly, it's appropriate to use a convenience method
281s            such as :meth:`request`.
281s
281s         .. note::
281s
281s            `release_conn` will only behave as expected if
281s            `preload_content=False` because we want to make
281s            `preload_content=False` the default behaviour someday soon without
281s            breaking backwards compatibility.
281s
281s         :param method:
281s             HTTP request method (such as GET, POST, PUT, etc.)
281s
281s         :param url:
281s             The URL to perform the request on.
281s
281s         :param body:
281s             Data to send in the request body, either :class:`str`, :class:`bytes`,
281s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
281s
281s         :param headers:
281s             Dictionary of custom headers to send, such as User-Agent,
281s             If-None-Match, etc. If None, pool headers are used. If provided,
281s             these headers completely replace any pool-specific headers.
281s
281s         :param retries:
281s             Configure the number of retries to allow before raising a
281s             :class:`~urllib3.exceptions.MaxRetryError` exception.
281s
281s             Pass ``None`` to retry until you receive a response. Pass a
281s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
281s             over different types of retries.
281s             Pass an integer number to retry connection errors that many times,
281s             but no other types of errors. Pass zero to never retry.
281s
281s             If ``False``, then retries are disabled and any exception is raised
281s             immediately. Also, instead of raising a MaxRetryError on redirects,
281s             the redirect response will be returned.
281s
281s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
281s
281s         :param redirect:
281s             If True, automatically handle redirects (status codes 301, 302,
281s             303, 307, 308). Each redirect counts as a retry. Disabling retries
281s             will disable redirect, too.
281s
281s         :param assert_same_host:
281s             If ``True``, will make sure that the host of the pool requests is
281s             consistent else will raise HostChangedError. When ``False``, you can
281s             use the pool on an HTTP proxy and request foreign hosts.
281s
281s         :param timeout:
281s             If specified, overrides the default timeout for this one
281s             request. It may be a float (in seconds) or an instance of
281s             :class:`urllib3.util.Timeout`.
281s
281s         :param pool_timeout:
281s             If set and the pool is set to block=True, then this method will
281s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
281s             connection is available within the time period.
281s
281s         :param bool preload_content:
281s             If True, the response's body will be preloaded into memory.
281s
281s         :param bool decode_content:
281s             If True, will attempt to decode the body based on the
281s             'content-encoding' header.
281s
281s         :param release_conn:
281s             If False, then the urlopen call will not release the connection
281s             back into the pool once a response is received (but will release if
281s             you read the entire contents of the response such as when
281s             `preload_content=True`). This is useful if you're not preloading
281s             the response's content immediately. You will need to call
281s             ``r.release_conn()`` on the response ``r`` to return the connection
281s             back into the pool. If None, it takes the value of ``preload_content``
281s             which defaults to ``True``.
281s
281s         :param bool chunked:
281s             If True, urllib3 will send the body using chunked transfer
281s             encoding. Otherwise, urllib3 will send the body using the standard
281s             content-length form. Defaults to False.
281s
281s         :param int body_pos:
281s             Position to seek to in file-like body in the event of a retry or
281s             redirect. Typically this won't need to be set because urllib3 will
281s             auto-populate the value when needed.
281s         """
281s         parsed_url = parse_url(url)
281s         destination_scheme = parsed_url.scheme
281s
281s         if headers is None:
281s             headers = self.headers
281s
281s         if not isinstance(retries, Retry):
281s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
281s
281s         if release_conn is None:
281s             release_conn = preload_content
281s
281s         # Check host
281s         if assert_same_host and not self.is_same_host(url):
281s             raise HostChangedError(self, url, retries)
281s
281s         # Ensure that the URL we're connecting to is properly encoded
281s         if url.startswith("/"):
281s             url = to_str(_encode_target(url))
281s         else:
281s             url = to_str(parsed_url.url)
281s
281s         conn = None
281s
281s         # Track whether `conn` needs to be released before
281s         # returning/raising/recursing. Update this variable if necessary, and
281s         # leave `release_conn` constant throughout the function. That way, if
281s         # the function recurses, the original value of `release_conn` will be
281s         # passed down into the recursive call, and its value will be respected.
281s         #
281s         # See issue #651 [1] for details.
281s         #
281s         # [1] 
281s         release_this_conn = release_conn
281s
281s         http_tunnel_required = connection_requires_http_tunnel(
281s             self.proxy, self.proxy_config, destination_scheme
281s         )
281s
281s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
281s         # have to copy the headers dict so we can safely change it without those
281s         # changes being reflected in anyone else's copy.
281s         if not http_tunnel_required:
281s             headers = headers.copy() # type: ignore[attr-defined]
281s             headers.update(self.proxy_headers) # type: ignore[union-attr]
281s
281s         # Must keep the exception bound to a separate variable or else Python 3
281s         # complains about UnboundLocalError.
281s         err = None
281s
281s         # Keep track of whether we cleanly exited the except block. This
281s         # ensures we do proper cleanup in finally.
281s         clean_exit = False
281s
281s         # Rewind body position, if needed. Record current position
281s         # for future rewinds in the event of a redirect/retry.
281s         body_pos = set_file_position(body, body_pos)
281s
281s         try:
281s             # Request a connection from the queue.
281s             timeout_obj = self._get_timeout(timeout)
281s             conn = self._get_conn(timeout=pool_timeout)
281s
281s             conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment]
281s
281s             # Is this a closed/new connection that requires CONNECT tunnelling?
281s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
281s                 try:
281s                     self._prepare_proxy(conn)
281s                 except (BaseSSLError, OSError, SocketTimeout) as e:
281s                     self._raise_timeout(
281s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
281s                     )
281s                     raise
281s
281s             # If we're going to release the connection in ``finally:``, then
281s             # the response doesn't need to know about the connection. Otherwise
281s             # it will also try to release it and we'll have a double-release
281s             # mess.
281s             response_conn = conn if not release_conn else None
281s
281s             # Make the request on the HTTPConnection object
281s >           response = self._make_request(
281s                 conn,
281s                 method,
281s                 url,
281s                 timeout=timeout_obj,
281s                 body=body,
281s                 headers=headers,
281s                 chunked=chunked,
281s                 retries=retries,
281s                 response_conn=response_conn,
281s                 preload_content=preload_content,
281s                 decode_content=decode_content,
281s                 **response_kw,
281s             )
281s
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
281s     conn.request(
281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
281s     self.endheaders()
281s /usr/lib/python3.12/http/client.py:1331: in endheaders
281s     self._send_output(message_body, encode_chunked=encode_chunked)
281s /usr/lib/python3.12/http/client.py:1091: in _send_output
281s     self.send(msg)
281s /usr/lib/python3.12/http/client.py:1035: in send
281s     self.connect()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s self = 
281s
281s     def connect(self):
281s         sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
281s         sock.settimeout(self.timeout)
281s         socket_path = unquote(urlparse(self.unix_socket_url).netloc)
281s >       sock.connect(socket_path)
281s E       FileNotFoundError: [Errno 2] No such file or directory
281s
281s /usr/lib/python3/dist-packages/requests_unixsocket/adapters.py:36: FileNotFoundError
281s
281s During handling of the above exception, another exception occurred:
281s
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None
281s proxies = OrderedDict({'no': '127.0.0.1,127.0.1.1,login.ubuntu.com,localhost,localdomain,novalocal,internal,archive.ubuntu.com,p...,objectstorage.prodstack5.canonical.com', 'https': 'http://squid.internal:3128', 'http': 'http://squid.internal:3128'})
281s
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s
281s         try:
281s >           resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s
281s /usr/lib/python3/dist-packages/requests/adapters.py:486:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
281s     retries = retries.increment(
281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:470: in increment
281s     raise reraise(type(error), error, _stacktrace)
281s /usr/lib/python3/dist-packages/urllib3/util/util.py:38: in reraise
281s     raise value.with_traceback(tb)
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: in urlopen
281s     response = self._make_request(
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
281s     conn.request(
281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
281s     self.endheaders()
281s /usr/lib/python3.12/http/client.py:1331: in endheaders
281s     self._send_output(message_body, encode_chunked=encode_chunked)
281s /usr/lib/python3.12/http/client.py:1091: in _send_output
281s     self.send(msg)
281s /usr/lib/python3.12/http/client.py:1035: in send
281s     self.connect()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s self = 
281s
281s     def connect(self):
281s         sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
281s         sock.settimeout(self.timeout)
281s         socket_path = unquote(urlparse(self.unix_socket_url).netloc)
281s >       sock.connect(socket_path)
281s E       urllib3.exceptions.ProtocolError: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))
281s
281s /usr/lib/python3/dist-packages/requests_unixsocket/adapters.py:36: ProtocolError
281s
281s During handling of the above exception, another exception occurred:
281s
281s cls = 
281s
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s >               cls.fetch_url(url)
281s
281s notebook/tests/launchnotebook.py:53:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s notebook/tests/launchnotebook.py:242: in fetch_url
281s     return requests.get(url)
281s /usr/lib/python3/dist-packages/requests_unixsocket/__init__.py:51: in get
281s     return request('get', url, **kwargs)
281s /usr/lib/python3/dist-packages/requests_unixsocket/__init__.py:46: in request
281s     return session.request(method=method, url=url, **kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
281s     resp = self.send(prep, **send_kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
281s     r = adapter.send(request, **kwargs)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None
281s proxies = OrderedDict({'no': '127.0.0.1,127.0.1.1,login.ubuntu.com,localhost,localdomain,novalocal,internal,archive.ubuntu.com,p...,objectstorage.prodstack5.canonical.com', 'https': 'http://squid.internal:3128', 'http': 'http://squid.internal:3128'})
281s
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s
281s         try:
281s             resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s
281s         except (ProtocolError, OSError) as err:
281s >           raise ConnectionError(err, request=request)
281s E           requests.exceptions.ConnectionError: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))
281s
281s /usr/lib/python3/dist-packages/requests/adapters.py:501: ConnectionError
281s
281s The above exception was the direct cause of the following exception:
281s
281s cls = 
281s
281s     @classmethod
281s     def setup_class(cls):
281s         cls.tmp_dir = TemporaryDirectory()
281s         def tmp(*parts):
281s             path = os.path.join(cls.tmp_dir.name, *parts)
281s             try:
281s                 os.makedirs(path)
281s             except OSError as e:
281s                 if e.errno != errno.EEXIST:
281s                     raise
281s             return path
281s
281s         cls.home_dir = tmp('home')
281s         data_dir = cls.data_dir = tmp('data')
281s         config_dir = cls.config_dir = tmp('config')
281s         runtime_dir = cls.runtime_dir = tmp('runtime')
281s         cls.notebook_dir = tmp('notebooks')
281s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
281s         cls.env_patch.start()
281s         # Patch systemwide & user-wide data & config directories, to isolate
281s         # the tests from oddities of the local setup. But leave Python env
281s         # locations alone, so data files for e.g. nbconvert are accessible.
281s         # If this isolation isn't sufficient, you may need to run the tests in
281s         # a virtualenv or conda env.
281s         cls.path_patch = patch.multiple(
281s             jupyter_core.paths,
281s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
281s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
281s         )
281s         cls.path_patch.start()
281s
281s         config = cls.config or Config()
281s         config.NotebookNotary.db_file = ':memory:'
281s
281s         cls.token = hexlify(os.urandom(4)).decode('ascii')
281s
281s         started = Event()
281s         def start_thread():
281s             try:
281s                 bind_args = cls.get_bind_args()
281s                 app = cls.notebook = NotebookApp(
281s                     port_retries=0,
281s                     open_browser=False,
281s                     config_dir=cls.config_dir,
281s                     data_dir=cls.data_dir,
281s                     runtime_dir=cls.runtime_dir,
281s                     notebook_dir=cls.notebook_dir,
281s                     base_url=cls.url_prefix,
281s                     config=config,
281s                     allow_root=True,
281s                     token=cls.token,
281s                     **bind_args
281s                 )
281s                 if "asyncio" in sys.modules:
281s                     app._init_asyncio_patch()
281s                     import asyncio
281s
281s                     asyncio.set_event_loop(asyncio.new_event_loop())
281s                     # Patch the current loop in order to match production
281s                     # behavior
281s                     import nest_asyncio
281s
281s                     nest_asyncio.apply()
281s                 # don't register signal handler during tests
281s                 app.init_signal = lambda : None
281s                 # clear log handlers and propagate to root for nose to capture it
281s                 # needs to be redone after initialize, which reconfigures logging
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 app.initialize(argv=cls.get_argv())
281s                 app.log.propagate = True
281s                 app.log.handlers = []
281s                 loop = IOLoop.current()
281s                 loop.add_callback(started.set)
281s                 app.start()
281s             finally:
281s                 # set the event, so failure to start doesn't cause a hang
281s                 started.set()
281s                 app.session_manager.close()
281s         cls.notebook_thread = Thread(target=start_thread)
281s         cls.notebook_thread.daemon = True
281s         cls.notebook_thread.start()
281s         started.wait()
281s >       cls.wait_until_alive()
281s
281s notebook/tests/launchnotebook.py:198:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s cls = 
281s
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E                   RuntimeError: The notebook server failed to start
281s
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s _____ ERROR at setup of NotebookAppJSONLoggingTests.test_log_json_enabled ______
281s
281s self = 
281s
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s
281s         :return: New socket connection.
281s         """
281s         try:
281s >           sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s     raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s
281s     def create_connection(
281s         address: tuple[str, int],
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         source_address: tuple[str, int] | None = None,
281s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s     ) -> socket.socket:
281s         """Connect to *address* and return the socket object.
281s
281s         Convenience function. Connect to *address* (a 2-tuple ``(host,
281s         port)``) and return the socket object. Passing the optional
281s         *timeout* parameter will set the timeout on the socket instance
281s         before attempting to connect. If no *timeout* is supplied, the
281s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
281s         is used. If *source_address* is set it must be a tuple of (host, port)
281s         for the socket to bind as a source address before making the connection.
281s         An host of '' or port 0 tells the OS to use the default.
281s         """
281s
281s         host, port = address
281s         if host.startswith("["):
281s             host = host.strip("[]")
281s         err = None
281s
281s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
281s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
281s         # The original create_connection function always returns all records.
281s         family = allowed_gai_family()
281s
281s         try:
281s             host.encode("idna")
281s         except UnicodeError:
281s             raise LocationParseError(f"'{host}', label empty or too long") from None
281s
281s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
281s             af, socktype, proto, canonname, sa = res
281s             sock = None
281s             try:
281s                 sock = socket.socket(af, socktype, proto)
281s
281s                 # If provided, set socket level options before connecting.
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
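[Editor's note: the `ConnectionRefusedError: [Errno 111]` above comes from the address-iteration loop in urllib3's `create_connection`, shown in the traceback. The refusal can be reproduced outside the test suite; the sketch below (standard library only, a simplified stand-in for urllib3's actual implementation) connects to a local port with no listener.]

```python
import errno
import socket

def try_connect(host, port, timeout=1.0):
    """Walk the getaddrinfo() results and try each address in turn --
    the same loop shape as the create_connection() traceback above
    (a simplified sketch, not urllib3's real code)."""
    err = None
    for af, socktype, proto, _canon, sa in socket.getaddrinfo(
        host, port, socket.AF_UNSPEC, socket.SOCK_STREAM
    ):
        sock = None
        try:
            sock = socket.socket(af, socktype, proto)
            sock.settimeout(timeout)
            sock.connect(sa)
            return sock  # success: caller owns the socket
        except OSError as e:
            err = e
            if sock is not None:
                sock.close()
    raise err  # every candidate address failed; re-raise the last error

# Reserve a port number with no listener: bind to port 0, read the
# kernel-assigned port, then close the socket again.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
free_port = probe.getsockname()[1]
probe.close()

try:
    try_connect("127.0.0.1", free_port)
except ConnectionRefusedError as e:
    assert e.errno == errno.ECONNREFUSED  # [Errno 111] on Linux
```

This is exactly the failure mode in the log: the notebook server thread died before binding port 12341, so every `connect()` to it is refused by the kernel.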
281s         This is the lowest level call for making a request, so you'll need
281s         to specify all the raw details.
281s 
281s         .. note::
281s 
281s            More commonly, it's appropriate to use a convenience method
281s            such as :meth:`request`.
281s 
281s         .. note::
281s 
281s            `release_conn` will only behave as expected if
281s            `preload_content=False` because we want to make
281s            `preload_content=False` the default behaviour someday soon without
281s            breaking backwards compatibility.
281s 
281s         :param method:
281s             HTTP request method (such as GET, POST, PUT, etc.)
281s 
281s         :param url:
281s             The URL to perform the request on.
281s 
281s         :param body:
281s             Data to send in the request body, either :class:`str`, :class:`bytes`,
281s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
281s 
281s         :param headers:
281s             Dictionary of custom headers to send, such as User-Agent,
281s             If-None-Match, etc. If None, pool headers are used. If provided,
281s             these headers completely replace any pool-specific headers.
281s 
281s         :param retries:
281s             Configure the number of retries to allow before raising a
281s             :class:`~urllib3.exceptions.MaxRetryError` exception.
281s 
281s             Pass ``None`` to retry until you receive a response. Pass a
281s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
281s             over different types of retries.
281s             Pass an integer number to retry connection errors that many times,
281s             but no other types of errors. Pass zero to never retry.
281s 
281s             If ``False``, then retries are disabled and any exception is raised
281s             immediately. Also, instead of raising a MaxRetryError on redirects,
281s             the redirect response will be returned.
281s 
281s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
281s 
281s         :param redirect:
281s             If True, automatically handle redirects (status codes 301, 302,
281s             303, 307, 308). Each redirect counts as a retry. Disabling retries
281s             will disable redirect, too.
281s 
281s         :param assert_same_host:
281s             If ``True``, will make sure that the host of the pool requests is
281s             consistent else will raise HostChangedError. When ``False``, you can
281s             use the pool on an HTTP proxy and request foreign hosts.
281s 
281s         :param timeout:
281s             If specified, overrides the default timeout for this one
281s             request. It may be a float (in seconds) or an instance of
281s             :class:`urllib3.util.Timeout`.
281s 
281s         :param pool_timeout:
281s             If set and the pool is set to block=True, then this method will
281s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
281s             connection is available within the time period.
281s 
281s         :param bool preload_content:
281s             If True, the response's body will be preloaded into memory.
281s 
281s         :param bool decode_content:
281s             If True, will attempt to decode the body based on the
281s             'content-encoding' header.
281s 
281s         :param release_conn:
281s             If False, then the urlopen call will not release the connection
281s             back into the pool once a response is received (but will release if
281s             you read the entire contents of the response such as when
281s             `preload_content=True`). This is useful if you're not preloading
281s             the response's content immediately. You will need to call
281s             ``r.release_conn()`` on the response ``r`` to return the connection
281s             back into the pool. If None, it takes the value of ``preload_content``
281s             which defaults to ``True``.
281s 
281s         :param bool chunked:
281s             If True, urllib3 will send the body using chunked transfer
281s             encoding. Otherwise, urllib3 will send the body using the standard
281s             content-length form. Defaults to False.
281s 
281s         :param int body_pos:
281s             Position to seek to in file-like body in the event of a retry or
281s             redirect. Typically this won't need to be set because urllib3 will
281s             auto-populate the value when needed.
281s         """
281s         parsed_url = parse_url(url)
281s         destination_scheme = parsed_url.scheme
281s 
281s         if headers is None:
281s             headers = self.headers
281s 
281s         if not isinstance(retries, Retry):
281s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
281s 
281s         if release_conn is None:
281s             release_conn = preload_content
281s 
281s         # Check host
281s         if assert_same_host and not self.is_same_host(url):
281s             raise HostChangedError(self, url, retries)
281s 
281s         # Ensure that the URL we're connecting to is properly encoded
281s         if url.startswith("/"):
281s             url = to_str(_encode_target(url))
281s         else:
281s             url = to_str(parsed_url.url)
281s 
281s         conn = None
281s 
281s         # Track whether `conn` needs to be released before
281s         # returning/raising/recursing. Update this variable if necessary, and
281s         # leave `release_conn` constant throughout the function. That way, if
281s         # the function recurses, the original value of `release_conn` will be
281s         # passed down into the recursive call, and its value will be respected.
281s         #
281s         # See issue #651 [1] for details.
281s         #
281s         # [1] 
281s         release_this_conn = release_conn
281s 
281s         http_tunnel_required = connection_requires_http_tunnel(
281s             self.proxy, self.proxy_config, destination_scheme
281s         )
281s 
281s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
281s         # have to copy the headers dict so we can safely change it without those
281s         # changes being reflected in anyone else's copy.
281s         if not http_tunnel_required:
281s             headers = headers.copy()  # type: ignore[attr-defined]
281s             headers.update(self.proxy_headers)  # type: ignore[union-attr]
281s 
281s         # Must keep the exception bound to a separate variable or else Python 3
281s         # complains about UnboundLocalError.
281s         err = None
281s 
281s         # Keep track of whether we cleanly exited the except block. This
281s         # ensures we do proper cleanup in finally.
281s         clean_exit = False
281s 
281s         # Rewind body position, if needed. Record current position
281s         # for future rewinds in the event of a redirect/retry.
281s         body_pos = set_file_position(body, body_pos)
281s 
281s         try:
281s             # Request a connection from the queue.
281s             timeout_obj = self._get_timeout(timeout)
281s             conn = self._get_conn(timeout=pool_timeout)
281s 
281s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
281s 
281s             # Is this a closed/new connection that requires CONNECT tunnelling?
281s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
281s                 try:
281s                     self._prepare_proxy(conn)
281s                 except (BaseSSLError, OSError, SocketTimeout) as e:
281s                     self._raise_timeout(
281s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
281s                     )
281s                     raise
281s 
281s             # If we're going to release the connection in ``finally:``, then
281s             # the response doesn't need to know about the connection. Otherwise
281s             # it will also try to release it and we'll have a double-release
281s             # mess.
281s             response_conn = conn if not release_conn else None
281s 
281s             # Make the request on the HTTPConnection object
281s >           response = self._make_request(
281s                 conn,
281s                 method,
281s                 url,
281s                 timeout=timeout_obj,
281s                 body=body,
281s                 headers=headers,
281s                 chunked=chunked,
281s                 retries=retries,
281s                 response_conn=response_conn,
281s                 preload_content=preload_content,
281s                 decode_content=decode_content,
281s                 **response_kw,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
281s     conn.request(
281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
281s     self.endheaders()
281s /usr/lib/python3.12/http/client.py:1331: in endheaders
281s     self._send_output(message_body, encode_chunked=encode_chunked)
281s /usr/lib/python3.12/http/client.py:1091: in _send_output
281s     self.send(msg)
281s /usr/lib/python3.12/http/client.py:1035: in send
281s     self.connect()
281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
281s     self.sock = self._new_conn()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s             sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s         except socket.gaierror as e:
281s             raise NameResolutionError(self.host, self, e) from e
281s         except SocketTimeout as e:
281s             raise ConnectTimeoutError(
281s                 self,
281s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
281s             ) from e
281s 
281s         except OSError as e:
281s >           raise NewConnectionError(
281s                 self, f"Failed to establish a new connection: {e}"
281s             ) from e
281s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s >           resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
281s     retries = retries.increment(
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s method = 'GET', url = '/a%40b/api/contents', response = None
281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
281s _pool = 
281s _stacktrace = 
281s 
281s     def increment(
281s         self,
281s         method: str | None = None,
281s         url: str | None = None,
281s         response: BaseHTTPResponse | None = None,
281s         error: Exception | None = None,
281s         _pool: ConnectionPool | None = None,
281s         _stacktrace: TracebackType | None = None,
281s     ) -> Retry:
281s         """Return a new Retry object with incremented retry counters.
281s 
281s         :param response: A response object, or None, if the server did not
281s             return a response.
281s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
281s         :param Exception error: An error encountered during the request, or
281s             None if the response was received successfully.
281s 
281s         :return: A new ``Retry`` object.
281s         """
281s         if self.total is False and error:
281s             # Disabled, indicate to re-raise the error.
281s             raise reraise(type(error), error, _stacktrace)
281s 
281s         total = self.total
281s         if total is not None:
281s             total -= 1
281s 
281s         connect = self.connect
281s         read = self.read
281s         redirect = self.redirect
281s         status_count = self.status
281s         other = self.other
281s         cause = "unknown"
281s         status = None
281s         redirect_location = None
281s 
281s         if error and self._is_connection_error(error):
281s             # Connect retry?
281s             if connect is False:
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif connect is not None:
281s                 connect -= 1
281s 
281s         elif error and self._is_read_error(error):
281s             # Read retry?
281s             if read is False or method is None or not self._is_method_retryable(method):
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif read is not None:
281s                 read -= 1
281s 
281s         elif error:
281s             # Other retry?
281s             if other is not None:
281s                 other -= 1
281s 
281s         elif response and response.get_redirect_location():
281s             # Redirect retry?
281s             if redirect is not None:
281s                 redirect -= 1
281s             cause = "too many redirects"
281s             response_redirect_location = response.get_redirect_location()
281s             if response_redirect_location:
281s                 redirect_location = response_redirect_location
281s             status = response.status
281s 
281s         else:
281s             # Incrementing because of a server error like a 500 in
281s             # status_forcelist and the given method is in the allowed_methods
281s             cause = ResponseError.GENERIC_ERROR
281s             if response and response.status:
281s                 if status_count is not None:
281s                     status_count -= 1
281s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
281s                 status = response.status
281s 
281s         history = self.history + (
281s             RequestHistory(method, url, error, status, redirect_location),
281s         )
281s 
281s         new_retry = self.new(
281s             total=total,
281s             connect=connect,
281s             read=read,
281s             redirect=redirect,
281s             status=status_count,
281s             other=other,
281s             history=history,
281s         )
281s 
281s         if new_retry.is_exhausted():
281s             reason = error or ResponseError(cause)
281s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
281s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s >               cls.fetch_url(url)
281s 
281s notebook/tests/launchnotebook.py:53: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s notebook/tests/launchnotebook.py:82: in fetch_url
281s     return requests.get(url)
281s /usr/lib/python3/dist-packages/requests/api.py:73: in get
281s     return request("get", url, params=params, **kwargs)
281s /usr/lib/python3/dist-packages/requests/api.py:59: in request
281s     return session.request(method=method, url=url, **kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
281s     resp = self.send(prep, **send_kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
281s     r = adapter.send(request, **kwargs)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
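[Editor's note: the `MaxRetryError` above is the direct consequence of `Retry(total=0, ...)`: each failed attempt decrements the budget, and exhaustion converts the last error into `MaxRetryError`. A toy model of that bookkeeping (hypothetical class names, not urllib3's API, which also tracks separate connect/read/redirect/status budgets and a request history):]

```python
class MaxRetriesExceeded(Exception):
    pass

class SimpleRetry:
    """Toy model of the counter bookkeeping in urllib3's Retry.increment()."""

    def __init__(self, total):
        self.total = total

    def increment(self, error):
        total = self.total - 1       # each failed attempt burns one credit
        if total < 0:                # budget overdrawn: give up for good
            raise MaxRetriesExceeded(
                f"Max retries exceeded (caused by {error!r})"
            ) from error
        return SimpleRetry(total)    # otherwise hand back a fresh object

# total=0, as in the log above: the very first connection error is fatal.
retry = SimpleRetry(total=0)
try:
    retry.increment(ConnectionRefusedError(111, "Connection refused"))
except MaxRetriesExceeded as exc:
    print(exc)
```

With `total=0` there is no second attempt, which is why the log shows a single refused `connect()` escalating straight to `MaxRetryError`.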
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s             resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s         except (ProtocolError, OSError) as err:
281s             raise ConnectionError(err, request=request)
281s 
281s         except MaxRetryError as e:
281s             if isinstance(e.reason, ConnectTimeoutError):
281s                 # TODO: Remove this in 3.0.0: see #2811
281s                 if not isinstance(e.reason, NewConnectionError):
281s                     raise ConnectTimeout(e, request=request)
281s 
281s             if isinstance(e.reason, ResponseError):
281s                 raise RetryError(e, request=request)
281s 
281s             if isinstance(e.reason, _ProxyError):
281s                 raise ProxyError(e, request=request)
281s 
281s             if isinstance(e.reason, _SSLError):
281s                 # This branch is for urllib3 v1.22 and later.
281s                 raise SSLError(e, request=request)
281s 
281s >           raise ConnectionError(e, request=request)
281s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def setup_class(cls):
281s >       super().setup_class()
281s 
281s notebook/tests/test_notebookapp.py:212: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s notebook/tests/launchnotebook.py:198: in setup_class
281s     cls.wait_until_alive()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E                   RuntimeError: The notebook server failed to start
281s 
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s _____ ERROR at setup of NotebookAppJSONLoggingTests.test_validate_log_json _____
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
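[Editor's note: the final `RuntimeError: The notebook server failed to start` comes from the harness's polling loop, which fails fast when the server thread is already dead instead of polling out the full deadline. A standalone sketch of that loop (assumed constant values; the real `MAX_WAITTIME`/`POLL_INTERVAL` live in notebook/tests/launchnotebook.py and may differ):]

```python
import time

MAX_WAITTIME = 2.0    # assumed values, see note above
POLL_INTERVAL = 0.1

def wait_until_alive(fetch, thread_alive):
    """Poll `fetch` until it succeeds. If the server thread has already
    died, fail fast with RuntimeError rather than polling to the deadline,
    mirroring the harness logic quoted in the traceback."""
    last_error = None
    for _ in range(int(MAX_WAITTIME / POLL_INTERVAL)):
        try:
            return fetch()
        except Exception as e:
            last_error = e
            if not thread_alive():
                # Chain the network error so the report shows the real cause.
                raise RuntimeError("The notebook server failed to start") from e
            time.sleep(POLL_INTERVAL)
    raise RuntimeError("The notebook server did not respond in time") from last_error

def failing_fetch():
    # Stand-in for requests.get() hitting a dead server.
    raise ConnectionError("[Errno 111] Connection refused")

try:
    wait_until_alive(failing_fetch, thread_alive=lambda: False)
except RuntimeError as e:
    print(e.__cause__)  # the chained ConnectionError
```

The `raise ... from e` chaining is what produces the "The above exception was the direct cause of the following exception" sections throughout this log.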
281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 
281s     family = allowed_gai_family()
281s 
281s     try:
281s         host.encode("idna")
281s     except UnicodeError:
281s         raise LocationParseError(f"'{host}', label empty or too long") from None
281s 
281s     for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
281s         af, socktype, proto, canonname, sa = res
281s         sock = None
281s         try:
281s             sock = socket.socket(af, socktype, proto)
281s 
281s             # If provided, set socket level options before connecting.
281s             _set_socket_options(sock, socket_options)
281s 
281s             if timeout is not _DEFAULT_TIMEOUT:
281s                 sock.settimeout(timeout)
281s             if source_address:
281s                 sock.bind(source_address)
281s >           sock.connect(sa)
281s E           ConnectionRefusedError: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s method = 'GET', url = '/a%40b/api/contents', body = None
281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s redirect = False, assert_same_host = False
281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
281s release_conn = False, chunked = False, body_pos = None, preload_content = False
281s decode_content = False, response_kw = {}
281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
281s destination_scheme = None, conn = None, release_this_conn = True
281s http_tunnel_required = False, err = None, clean_exit = False
281s 
281s     def urlopen(  # type: ignore[override]
281s         self,
281s         method: str,
281s         url: str,
281s         body: _TYPE_BODY | None = None,
281s         headers: typing.Mapping[str, str] | None = None,
281s         retries: Retry | bool | int | None = None,
281s         redirect: bool = True,
281s         assert_same_host: bool = True,
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         pool_timeout: int | None = None,
281s         release_conn: bool | None = None,
281s         chunked: bool = False,
281s         body_pos: _TYPE_BODY_POSITION | None = None,
281s         preload_content: bool = True,
281s         decode_content: bool = True,
281s         **response_kw: typing.Any,
281s     ) -> BaseHTTPResponse:
281s         """
281s         Get a connection from the pool and perform an HTTP request. This is the
281s         lowest level call for making a request, so you'll need to specify all
281s         the raw details.
281s 
281s         .. note::
281s 
281s            More commonly, it's appropriate to use a convenience method
281s            such as :meth:`request`.
281s 
281s         .. note::
281s 
281s            `release_conn` will only behave as expected if
281s            `preload_content=False` because we want to make
281s            `preload_content=False` the default behaviour someday soon without
281s            breaking backwards compatibility.
281s 
281s         :param method:
281s             HTTP request method (such as GET, POST, PUT, etc.)
281s 
281s         :param url:
281s             The URL to perform the request on.
281s 
281s         :param body:
281s             Data to send in the request body, either :class:`str`, :class:`bytes`,
281s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
281s 
281s         :param headers:
281s             Dictionary of custom headers to send, such as User-Agent,
281s             If-None-Match, etc. If None, pool headers are used. If provided,
281s             these headers completely replace any pool-specific headers.
281s 
281s         :param retries:
281s             Configure the number of retries to allow before raising a
281s             :class:`~urllib3.exceptions.MaxRetryError` exception.
281s 
281s             Pass ``None`` to retry until you receive a response. Pass a
281s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
281s             over different types of retries.
281s             Pass an integer number to retry connection errors that many times,
281s             but no other types of errors. Pass zero to never retry.
281s 
281s             If ``False``, then retries are disabled and any exception is raised
281s             immediately. Also, instead of raising a MaxRetryError on redirects,
281s             the redirect response will be returned.
281s 
281s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
281s 
281s         :param redirect:
281s             If True, automatically handle redirects (status codes 301, 302,
281s             303, 307, 308). Each redirect counts as a retry. Disabling retries
281s             will disable redirect, too.
281s 
281s         :param assert_same_host:
281s             If ``True``, will make sure that the host of the pool requests is
281s             consistent else will raise HostChangedError. When ``False``, you can
281s             use the pool on an HTTP proxy and request foreign hosts.
281s 
281s         :param timeout:
281s             If specified, overrides the default timeout for this one
281s             request. It may be a float (in seconds) or an instance of
281s             :class:`urllib3.util.Timeout`.
281s 
281s         :param pool_timeout:
281s             If set and the pool is set to block=True, then this method will
281s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
281s             connection is available within the time period.
281s 
281s         :param bool preload_content:
281s             If True, the response's body will be preloaded into memory.
281s 
281s         :param bool decode_content:
281s             If True, will attempt to decode the body based on the
281s             'content-encoding' header.
281s 
281s         :param release_conn:
281s             If False, then the urlopen call will not release the connection
281s             back into the pool once a response is received (but will release if
281s             you read the entire contents of the response such as when
281s             `preload_content=True`). This is useful if you're not preloading
281s             the response's content immediately. You will need to call
281s             ``r.release_conn()`` on the response ``r`` to return the connection
281s             back into the pool. If None, it takes the value of ``preload_content``
281s             which defaults to ``True``.
281s 
281s         :param bool chunked:
281s             If True, urllib3 will send the body using chunked transfer
281s             encoding. Otherwise, urllib3 will send the body using the standard
281s             content-length form. Defaults to False.
281s 
281s         :param int body_pos:
281s             Position to seek to in file-like body in the event of a retry or
281s             redirect. Typically this won't need to be set because urllib3 will
281s             auto-populate the value when needed.
281s         """
281s         parsed_url = parse_url(url)
281s         destination_scheme = parsed_url.scheme
281s 
281s         if headers is None:
281s             headers = self.headers
281s 
281s         if not isinstance(retries, Retry):
281s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
281s 
281s         if release_conn is None:
281s             release_conn = preload_content
281s 
281s         # Check host
281s         if assert_same_host and not self.is_same_host(url):
281s             raise HostChangedError(self, url, retries)
281s 
281s         # Ensure that the URL we're connecting to is properly encoded
281s         if url.startswith("/"):
281s             url = to_str(_encode_target(url))
281s         else:
281s             url = to_str(parsed_url.url)
281s 
281s         conn = None
281s 
281s         # Track whether `conn` needs to be released before
281s         # returning/raising/recursing. Update this variable if necessary, and
281s         # leave `release_conn` constant throughout the function. That way, if
281s         # the function recurses, the original value of `release_conn` will be
281s         # passed down into the recursive call, and its value will be respected.
281s         #
281s         # See issue #651 [1] for details.
281s         #
281s         # [1] 
281s         release_this_conn = release_conn
281s 
281s         http_tunnel_required = connection_requires_http_tunnel(
281s             self.proxy, self.proxy_config, destination_scheme
281s         )
281s 
281s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
281s         # have to copy the headers dict so we can safely change it without those
281s         # changes being reflected in anyone else's copy.
281s         if not http_tunnel_required:
281s             headers = headers.copy()  # type: ignore[attr-defined]
281s             headers.update(self.proxy_headers)  # type: ignore[union-attr]
281s 
281s         # Must keep the exception bound to a separate variable or else Python 3
281s         # complains about UnboundLocalError.
281s         err = None
281s 
281s         # Keep track of whether we cleanly exited the except block. This
281s         # ensures we do proper cleanup in finally.
281s         clean_exit = False
281s 
281s         # Rewind body position, if needed. Record current position
281s         # for future rewinds in the event of a redirect/retry.
281s         body_pos = set_file_position(body, body_pos)
281s 
281s         try:
281s             # Request a connection from the queue.
281s             timeout_obj = self._get_timeout(timeout)
281s             conn = self._get_conn(timeout=pool_timeout)
281s 
281s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
281s 
281s             # Is this a closed/new connection that requires CONNECT tunnelling?
281s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
281s                 try:
281s                     self._prepare_proxy(conn)
281s                 except (BaseSSLError, OSError, SocketTimeout) as e:
281s                     self._raise_timeout(
281s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
281s                     )
281s                     raise
281s 
281s             # If we're going to release the connection in ``finally:``, then
281s             # the response doesn't need to know about the connection. Otherwise
281s             # it will also try to release it and we'll have a double-release
281s             # mess.
281s             response_conn = conn if not release_conn else None
281s 
281s             # Make the request on the HTTPConnection object
281s >           response = self._make_request(
281s                 conn,
281s                 method,
281s                 url,
281s                 timeout=timeout_obj,
281s                 body=body,
281s                 headers=headers,
281s                 chunked=chunked,
281s                 retries=retries,
281s                 response_conn=response_conn,
281s                 preload_content=preload_content,
281s                 decode_content=decode_content,
281s                 **response_kw,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
281s     conn.request(
281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
281s     self.endheaders()
281s /usr/lib/python3.12/http/client.py:1331: in endheaders
281s     self._send_output(message_body, encode_chunked=encode_chunked)
281s /usr/lib/python3.12/http/client.py:1091: in _send_output
281s     self.send(msg)
281s /usr/lib/python3.12/http/client.py:1035: in send
281s     self.connect()
281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
281s     self.sock = self._new_conn()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s             sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s         except socket.gaierror as e:
281s             raise NameResolutionError(self.host, self, e) from e
281s         except SocketTimeout as e:
281s             raise ConnectTimeoutError(
281s                 self,
281s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
281s             ) from e
281s 
281s         except OSError as e:
281s >           raise NewConnectionError(
281s                 self, f"Failed to establish a new connection: {e}"
281s             ) from e
281s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s >           resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
281s     retries = retries.increment(
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s method = 'GET', url = '/a%40b/api/contents', response = None
281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
281s _pool = 
281s _stacktrace = 
281s 
281s     def increment(
281s         self,
281s         method: str | None = None,
281s         url: str | None = None,
281s         response: BaseHTTPResponse | None = None,
281s         error: Exception | None = None,
281s         _pool: ConnectionPool | None = None,
281s         _stacktrace: TracebackType | None = None,
281s     ) -> Retry:
281s         """Return a new Retry object with incremented retry counters.
281s 
281s         :param response: A response object, or None, if the server did not
281s             return a response.
281s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
281s         :param Exception error: An error encountered during the request, or
281s             None if the response was received successfully.
281s 
281s         :return: A new ``Retry`` object.
281s         """
281s         if self.total is False and error:
281s             # Disabled, indicate to re-raise the error.
281s             raise reraise(type(error), error, _stacktrace)
281s 
281s         total = self.total
281s         if total is not None:
281s             total -= 1
281s 
281s         connect = self.connect
281s         read = self.read
281s         redirect = self.redirect
281s         status_count = self.status
281s         other = self.other
281s         cause = "unknown"
281s         status = None
281s         redirect_location = None
281s 
281s         if error and self._is_connection_error(error):
281s             # Connect retry?
281s             if connect is False:
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif connect is not None:
281s                 connect -= 1
281s 
281s         elif error and self._is_read_error(error):
281s             # Read retry?
281s             if read is False or method is None or not self._is_method_retryable(method):
281s                 raise reraise(type(error), error, _stacktrace)
281s             elif read is not None:
281s                 read -= 1
281s 
281s         elif error:
281s             # Other retry?
281s             if other is not None:
281s                 other -= 1
281s 
281s         elif response and response.get_redirect_location():
281s             # Redirect retry?
281s             if redirect is not None:
281s                 redirect -= 1
281s             cause = "too many redirects"
281s             response_redirect_location = response.get_redirect_location()
281s             if response_redirect_location:
281s                 redirect_location = response_redirect_location
281s             status = response.status
281s 
281s         else:
281s             # Incrementing because of a server error like a 500 in
281s             # status_forcelist and the given method is in the allowed_methods
281s             cause = ResponseError.GENERIC_ERROR
281s             if response and response.status:
281s                 if status_count is not None:
281s                     status_count -= 1
281s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
281s                 status = response.status
281s 
281s         history = self.history + (
281s             RequestHistory(method, url, error, status, redirect_location),
281s         )
281s 
281s         new_retry = self.new(
281s             total=total,
281s             connect=connect,
281s             read=read,
281s             redirect=redirect,
281s             status=status_count,
281s             other=other,
281s             history=history,
281s         )
281s 
281s         if new_retry.is_exhausted():
281s             reason = error or ResponseError(cause)
281s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
281s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s >               cls.fetch_url(url)
281s 
281s notebook/tests/launchnotebook.py:53: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s notebook/tests/launchnotebook.py:82: in fetch_url
281s     return requests.get(url)
281s /usr/lib/python3/dist-packages/requests/api.py:73: in get
281s     return request("get", url, params=params, **kwargs)
281s /usr/lib/python3/dist-packages/requests/api.py:59: in request
281s     return session.request(method=method, url=url, **kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
281s     resp = self.send(prep, **send_kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
281s     r = adapter.send(request, **kwargs)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s         :rtype: requests.Response
281s         """
281s 
281s         try:
281s             conn = self.get_connection(request.url, proxies)
281s         except LocationValueError as e:
281s             raise InvalidURL(e, request=request)
281s 
281s         self.cert_verify(conn, request.url, verify, cert)
281s         url = self.request_url(request, proxies)
281s         self.add_headers(
281s             request,
281s             stream=stream,
281s             timeout=timeout,
281s             verify=verify,
281s             cert=cert,
281s             proxies=proxies,
281s         )
281s 
281s         chunked = not (request.body is None or "Content-Length" in request.headers)
281s 
281s         if isinstance(timeout, tuple):
281s             try:
281s                 connect, read = timeout
281s                 timeout = TimeoutSauce(connect=connect, read=read)
281s             except ValueError:
281s                 raise ValueError(
281s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s                     f"or a single float to set both timeouts to the same value."
281s                 )
281s         elif isinstance(timeout, TimeoutSauce):
281s             pass
281s         else:
281s             timeout = TimeoutSauce(connect=timeout, read=timeout)
281s 
281s         try:
281s             resp = conn.urlopen(
281s                 method=request.method,
281s                 url=url,
281s                 body=request.body,
281s                 headers=request.headers,
281s                 redirect=False,
281s                 assert_same_host=False,
281s                 preload_content=False,
281s                 decode_content=False,
281s                 retries=self.max_retries,
281s                 timeout=timeout,
281s                 chunked=chunked,
281s             )
281s 
281s         except (ProtocolError, OSError) as err:
281s             raise ConnectionError(err, request=request)
281s 
281s         except MaxRetryError as e:
281s             if isinstance(e.reason, ConnectTimeoutError):
281s                 # TODO: Remove this in 3.0.0: see #2811
281s                 if not isinstance(e.reason, NewConnectionError):
281s                     raise ConnectTimeout(e, request=request)
281s 
281s             if isinstance(e.reason, ResponseError):
281s                 raise RetryError(e, request=request)
281s 
281s             if isinstance(e.reason, _ProxyError):
281s                 raise ProxyError(e, request=request)
281s 
281s             if isinstance(e.reason, _SSLError):
281s                 # This branch is for urllib3 v1.22 and later.
281s                 raise SSLError(e, request=request)
281s 
281s >           raise ConnectionError(e, request=request)
281s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s 
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s cls = 
281s 
281s     @classmethod
281s     def setup_class(cls):
281s >       super().setup_class()
281s 
281s notebook/tests/test_notebookapp.py:212: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s notebook/tests/launchnotebook.py:198: in setup_class
281s     cls.wait_until_alive()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s cls = 
281s 
281s     @classmethod
281s     def wait_until_alive(cls):
281s         """Wait for the server to be alive"""
281s         url = cls.base_url() + 'api/contents'
281s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s             try:
281s                 cls.fetch_url(url)
281s             except ModuleNotFoundError as error:
281s                 # Errors that should be immediately thrown back to caller
281s                 raise error
281s             except Exception as e:
281s                 if not cls.notebook_thread.is_alive():
281s >                   raise RuntimeError("The notebook server failed to start") from e
281s E                   RuntimeError: The notebook server failed to start
281s 
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s ____________ ERROR at setup of RedirectTestCase.test_trailing_slash ____________
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s >           sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
281s     raise err
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s address = ('localhost', 12341), timeout = None, source_address = None
281s socket_options = [(6, 1, 1)]
281s 
281s     def create_connection(
281s         address: tuple[str, int],
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         source_address: tuple[str, int] | None = None,
281s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
281s     ) -> socket.socket:
281s         """Connect to *address* and return the socket object.
281s 
281s         Convenience function. Connect to *address* (a 2-tuple ``(host,
281s         port)``) and return the socket object. Passing the optional
281s         *timeout* parameter will set the timeout on the socket instance
281s         before attempting to connect. If no *timeout* is supplied, the
281s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
281s         is used. If *source_address* is set it must be a tuple of (host, port)
281s         for the socket to bind as a source address before making the connection.
281s         An host of '' or port 0 tells the OS to use the default.
281s         """
281s 
281s         host, port = address
281s         if host.startswith("["):
281s             host = host.strip("[]")
281s         err = None
281s 
281s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
281s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
281s         # The original create_connection function always returns all records.
281s     family = allowed_gai_family()
281s 
281s     try:
281s         host.encode("idna")
281s     except UnicodeError:
281s         raise LocationParseError(f"'{host}', label empty or too long") from None
281s 
281s     for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
281s         af, socktype, proto, canonname, sa = res
281s         sock = None
281s         try:
281s             sock = socket.socket(af, socktype, proto)
281s 
281s             # If provided, set socket level options before connecting.
281s             _set_socket_options(sock, socket_options)
281s 
281s             if timeout is not _DEFAULT_TIMEOUT:
281s                 sock.settimeout(timeout)
281s             if source_address:
281s                 sock.bind(source_address)
281s >           sock.connect(sa)
281s E           ConnectionRefusedError: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s method = 'GET', url = '/a%40b/api/contents', body = None
281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
281s redirect = False, assert_same_host = False
281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
281s release_conn = False, chunked = False, body_pos = None, preload_content = False
281s decode_content = False, response_kw = {}
281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
281s destination_scheme = None, conn = None, release_this_conn = True
281s http_tunnel_required = False, err = None, clean_exit = False
281s 
281s     def urlopen(  # type: ignore[override]
281s         self,
281s         method: str,
281s         url: str,
281s         body: _TYPE_BODY | None = None,
281s         headers: typing.Mapping[str, str] | None = None,
281s         retries: Retry | bool | int | None = None,
281s         redirect: bool = True,
281s         assert_same_host: bool = True,
281s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
281s         pool_timeout: int | None = None,
281s         release_conn: bool | None = None,
281s         chunked: bool = False,
281s         body_pos: _TYPE_BODY_POSITION | None = None,
281s         preload_content: bool = True,
281s         decode_content: bool = True,
281s         **response_kw: typing.Any,
281s     ) -> BaseHTTPResponse:
281s         """
281s         Get a connection from the pool and perform an HTTP request. This is the
281s         lowest level call for making a request, so you'll need to specify all
281s         the raw details.
281s 
281s         .. note::
281s 
281s            More commonly, it's appropriate to use a convenience method
281s            such as :meth:`request`.
281s 
281s         .. note::
281s 
281s            `release_conn` will only behave as expected if
281s            `preload_content=False` because we want to make
281s            `preload_content=False` the default behaviour someday soon without
281s            breaking backwards compatibility.
281s 
281s         :param method:
281s             HTTP request method (such as GET, POST, PUT, etc.)
281s 
281s         :param url:
281s             The URL to perform the request on.
281s 
281s         :param body:
281s             Data to send in the request body, either :class:`str`, :class:`bytes`,
281s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
281s 
281s         :param headers:
281s             Dictionary of custom headers to send, such as User-Agent,
281s             If-None-Match, etc. If None, pool headers are used. If provided,
281s             these headers completely replace any pool-specific headers.
281s 
281s         :param retries:
281s             Configure the number of retries to allow before raising a
281s             :class:`~urllib3.exceptions.MaxRetryError` exception.
281s 
281s             Pass ``None`` to retry until you receive a response. Pass a
281s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
281s             over different types of retries.
281s             Pass an integer number to retry connection errors that many times,
281s             but no other types of errors. Pass zero to never retry.
281s 
281s             If ``False``, then retries are disabled and any exception is raised
281s             immediately. Also, instead of raising a MaxRetryError on redirects,
281s             the redirect response will be returned.
281s 
281s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
281s 
281s         :param redirect:
281s             If True, automatically handle redirects (status codes 301, 302,
281s             303, 307, 308). Each redirect counts as a retry. Disabling retries
281s             will disable redirect, too.
281s 
281s         :param assert_same_host:
281s             If ``True``, will make sure that the host of the pool requests is
281s             consistent else will raise HostChangedError. When ``False``, you can
281s             use the pool on an HTTP proxy and request foreign hosts.
281s 
281s         :param timeout:
281s             If specified, overrides the default timeout for this one
281s             request. It may be a float (in seconds) or an instance of
281s             :class:`urllib3.util.Timeout`.
281s 
281s         :param pool_timeout:
281s             If set and the pool is set to block=True, then this method will
281s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
281s             connection is available within the time period.
281s 
281s         :param bool preload_content:
281s             If True, the response's body will be preloaded into memory.
281s 
281s         :param bool decode_content:
281s             If True, will attempt to decode the body based on the
281s             'content-encoding' header.
281s 
281s         :param release_conn:
281s             If False, then the urlopen call will not release the connection
281s             back into the pool once a response is received (but will release if
281s             you read the entire contents of the response such as when
281s             `preload_content=True`). This is useful if you're not preloading
281s             the response's content immediately. You will need to call
281s             ``r.release_conn()`` on the response ``r`` to return the connection
281s             back into the pool. If None, it takes the value of ``preload_content``
281s             which defaults to ``True``.
281s 
281s         :param bool chunked:
281s             If True, urllib3 will send the body using chunked transfer
281s             encoding. Otherwise, urllib3 will send the body using the standard
281s             content-length form. Defaults to False.
281s 
281s         :param int body_pos:
281s             Position to seek to in file-like body in the event of a retry or
281s             redirect. Typically this won't need to be set because urllib3 will
281s             auto-populate the value when needed.
281s         """
281s         parsed_url = parse_url(url)
281s         destination_scheme = parsed_url.scheme
281s 
281s         if headers is None:
281s             headers = self.headers
281s 
281s         if not isinstance(retries, Retry):
281s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
281s 
281s         if release_conn is None:
281s             release_conn = preload_content
281s 
281s         # Check host
281s         if assert_same_host and not self.is_same_host(url):
281s             raise HostChangedError(self, url, retries)
281s 
281s         # Ensure that the URL we're connecting to is properly encoded
281s         if url.startswith("/"):
281s             url = to_str(_encode_target(url))
281s         else:
281s             url = to_str(parsed_url.url)
281s 
281s         conn = None
281s 
281s         # Track whether `conn` needs to be released before
281s         # returning/raising/recursing. Update this variable if necessary, and
281s         # leave `release_conn` constant throughout the function. That way, if
281s         # the function recurses, the original value of `release_conn` will be
281s         # passed down into the recursive call, and its value will be respected.
281s         #
281s         # See issue #651 [1] for details.
281s         #
281s         # [1] 
281s         release_this_conn = release_conn
281s 
281s         http_tunnel_required = connection_requires_http_tunnel(
281s             self.proxy, self.proxy_config, destination_scheme
281s         )
281s 
281s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
281s         # have to copy the headers dict so we can safely change it without those
281s         # changes being reflected in anyone else's copy.
281s         if not http_tunnel_required:
281s             headers = headers.copy()  # type: ignore[attr-defined]
281s             headers.update(self.proxy_headers)  # type: ignore[union-attr]
281s 
281s         # Must keep the exception bound to a separate variable or else Python 3
281s         # complains about UnboundLocalError.
281s         err = None
281s 
281s         # Keep track of whether we cleanly exited the except block. This
281s         # ensures we do proper cleanup in finally.
281s         clean_exit = False
281s 
281s         # Rewind body position, if needed. Record current position
281s         # for future rewinds in the event of a redirect/retry.
281s         body_pos = set_file_position(body, body_pos)
281s 
281s         try:
281s             # Request a connection from the queue.
281s             timeout_obj = self._get_timeout(timeout)
281s             conn = self._get_conn(timeout=pool_timeout)
281s 
281s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
281s 
281s             # Is this a closed/new connection that requires CONNECT tunnelling?
281s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
281s                 try:
281s                     self._prepare_proxy(conn)
281s                 except (BaseSSLError, OSError, SocketTimeout) as e:
281s                     self._raise_timeout(
281s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
281s                     )
281s                     raise
281s 
281s             # If we're going to release the connection in ``finally:``, then
281s             # the response doesn't need to know about the connection. Otherwise
281s             # it will also try to release it and we'll have a double-release
281s             # mess.
281s             response_conn = conn if not release_conn else None
281s 
281s             # Make the request on the HTTPConnection object
281s >           response = self._make_request(
281s                 conn,
281s                 method,
281s                 url,
281s                 timeout=timeout_obj,
281s                 body=body,
281s                 headers=headers,
281s                 chunked=chunked,
281s                 retries=retries,
281s                 response_conn=response_conn,
281s                 preload_content=preload_content,
281s                 decode_content=decode_content,
281s                 **response_kw,
281s             )
281s 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
281s     conn.request(
281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
281s     self.endheaders()
281s /usr/lib/python3.12/http/client.py:1331: in endheaders
281s     self._send_output(message_body, encode_chunked=encode_chunked)
281s /usr/lib/python3.12/http/client.py:1091: in _send_output
281s     self.send(msg)
281s /usr/lib/python3.12/http/client.py:1035: in send
281s     self.connect()
281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
281s     self.sock = self._new_conn()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
281s 
281s self = 
281s 
281s     def _new_conn(self) -> socket.socket:
281s         """Establish a socket connection and set nodelay settings on it.
281s 
281s         :return: New socket connection.
281s         """
281s         try:
281s             sock = connection.create_connection(
281s                 (self._dns_host, self.port),
281s                 self.timeout,
281s                 source_address=self.source_address,
281s                 socket_options=self.socket_options,
281s             )
281s         except socket.gaierror as e:
281s             raise NameResolutionError(self.host, self, e) from e
281s         except SocketTimeout as e:
281s             raise ConnectTimeoutError(
281s                 self,
281s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
281s             ) from e
281s 
281s         except OSError as e:
281s >           raise NewConnectionError(
281s                 self, f"Failed to establish a new connection: {e}"
281s             ) from e
281s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
281s 
281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
281s 
281s The above exception was the direct cause of the following exception:
281s 
281s self = 
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s 
281s     def send(
281s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s     ):
281s         """Sends PreparedRequest object. Returns Response object.
281s 
281s         :param request: The :class:`PreparedRequest ` being sent.
281s         :param stream: (optional) Whether to stream the request content.
281s         :param timeout: (optional) How long to wait for the server to send
281s             data before giving up, as a float, or a :ref:`(connect timeout,
281s             read timeout) ` tuple.
281s         :type timeout: float or tuple or urllib3 Timeout object
281s         :param verify: (optional) Either a boolean, in which case it controls whether
281s             we verify the server's TLS certificate, or a string, in which case it
281s             must be a path to a CA bundle to use
281s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s         :param proxies: (optional) The proxies dictionary to apply to the request.
281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | 
None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 281s raise reraise(type(error), error, _stacktrace) 281s 281s total = self.total 281s if total is not None: 281s total -= 1 281s 281s connect = self.connect 281s read = self.read 281s redirect = self.redirect 281s status_count = self.status 281s other = self.other 281s cause = "unknown" 281s status = None 281s redirect_location = None 281s 281s if error and self._is_connection_error(error): 281s # Connect retry? 281s if connect is False: 281s raise reraise(type(error), error, _stacktrace) 281s elif connect is not None: 281s connect -= 1 281s 281s elif error and self._is_read_error(error): 281s # Read retry? 281s if read is False or method is None or not self._is_method_retryable(method): 281s raise reraise(type(error), error, _stacktrace) 281s elif read is not None: 281s read -= 1 281s 281s elif error: 281s # Other retry? 281s if other is not None: 281s other -= 1 281s 281s elif response and response.get_redirect_location(): 281s # Redirect retry? 
281s if redirect is not None: 281s redirect -= 1 281s cause = "too many redirects" 281s response_redirect_location = response.get_redirect_location() 281s if response_redirect_location: 281s redirect_location = response_redirect_location 281s status = response.status 281s 281s else: 281s # Incrementing because of a server error like a 500 in 281s # status_forcelist and the given method is in the allowed_methods 281s cause = ResponseError.GENERIC_ERROR 281s if response and response.status: 281s if status_count is not None: 281s status_count -= 1 281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 281s status = response.status 281s 281s history = self.history + ( 281s RequestHistory(method, url, error, status, redirect_location), 281s ) 281s 281s new_retry = self.new( 281s total=total, 281s connect=connect, 281s read=read, 281s redirect=redirect, 281s status=status_count, 281s other=other, 281s history=history, 281s ) 281s 281s if new_retry.is_exhausted(): 281s reason = error or ResponseError(cause) 281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 281s 281s During handling of the above exception, another exception occurred: 281s 281s cls = 281s 281s @classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s > cls.fetch_url(url) 281s 281s notebook/tests/launchnotebook.py:53: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s notebook/tests/launchnotebook.py:82: in fetch_url 281s return requests.get(url) 281s 
/usr/lib/python3/dist-packages/requests/api.py:73: in get 281s return request("get", url, params=params, **kwargs) 281s /usr/lib/python3/dist-packages/requests/api.py:59: in request 281s return session.request(method=method, url=url, **kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 281s resp = self.send(prep, **send_kwargs) 281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 281s r = adapter.send(request, **kwargs) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 
281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s except (ProtocolError, OSError) as err: 281s raise ConnectionError(err, request=request) 281s 281s except MaxRetryError as e: 281s if isinstance(e.reason, ConnectTimeoutError): 281s # TODO: Remove this in 3.0.0: see #2811 281s if not isinstance(e.reason, NewConnectionError): 281s raise ConnectTimeout(e, request=request) 281s 281s if isinstance(e.reason, ResponseError): 281s raise RetryError(e, request=request) 281s 281s if isinstance(e.reason, _ProxyError): 281s raise ProxyError(e, request=request) 281s 281s if isinstance(e.reason, _SSLError): 281s # This branch is for urllib3 v1.22 and later. 
281s raise SSLError(e, request=request) 281s 281s > raise ConnectionError(e, request=request) 281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s cls = 281s 281s @classmethod 281s def setup_class(cls): 281s cls.tmp_dir = TemporaryDirectory() 281s def tmp(*parts): 281s path = os.path.join(cls.tmp_dir.name, *parts) 281s try: 281s os.makedirs(path) 281s except OSError as e: 281s if e.errno != errno.EEXIST: 281s raise 281s return path 281s 281s cls.home_dir = tmp('home') 281s data_dir = cls.data_dir = tmp('data') 281s config_dir = cls.config_dir = tmp('config') 281s runtime_dir = cls.runtime_dir = tmp('runtime') 281s cls.notebook_dir = tmp('notebooks') 281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 281s cls.env_patch.start() 281s # Patch systemwide & user-wide data & config directories, to isolate 281s # the tests from oddities of the local setup. But leave Python env 281s # locations alone, so data files for e.g. nbconvert are accessible. 281s # If this isolation isn't sufficient, you may need to run the tests in 281s # a virtualenv or conda env. 
281s cls.path_patch = patch.multiple( 281s jupyter_core.paths, 281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 281s ) 281s cls.path_patch.start() 281s 281s config = cls.config or Config() 281s config.NotebookNotary.db_file = ':memory:' 281s 281s cls.token = hexlify(os.urandom(4)).decode('ascii') 281s 281s started = Event() 281s def start_thread(): 281s try: 281s bind_args = cls.get_bind_args() 281s app = cls.notebook = NotebookApp( 281s port_retries=0, 281s open_browser=False, 281s config_dir=cls.config_dir, 281s data_dir=cls.data_dir, 281s runtime_dir=cls.runtime_dir, 281s notebook_dir=cls.notebook_dir, 281s base_url=cls.url_prefix, 281s config=config, 281s allow_root=True, 281s token=cls.token, 281s **bind_args 281s ) 281s if "asyncio" in sys.modules: 281s app._init_asyncio_patch() 281s import asyncio 281s 281s asyncio.set_event_loop(asyncio.new_event_loop()) 281s # Patch the current loop in order to match production 281s # behavior 281s import nest_asyncio 281s 281s nest_asyncio.apply() 281s # don't register signal handler during tests 281s app.init_signal = lambda : None 281s # clear log handlers and propagate to root for nose to capture it 281s # needs to be redone after initialize, which reconfigures logging 281s app.log.propagate = True 281s app.log.handlers = [] 281s app.initialize(argv=cls.get_argv()) 281s app.log.propagate = True 281s app.log.handlers = [] 281s loop = IOLoop.current() 281s loop.add_callback(started.set) 281s app.start() 281s finally: 281s # set the event, so failure to start doesn't cause a hang 281s started.set() 281s app.session_manager.close() 281s cls.notebook_thread = Thread(target=start_thread) 281s cls.notebook_thread.daemon = True 281s cls.notebook_thread.start() 281s started.wait() 281s > cls.wait_until_alive() 281s 281s notebook/tests/launchnotebook.py:198: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s cls = 281s 281s 
@classmethod 281s def wait_until_alive(cls): 281s """Wait for the server to be alive""" 281s url = cls.base_url() + 'api/contents' 281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 281s try: 281s cls.fetch_url(url) 281s except ModuleNotFoundError as error: 281s # Errors that should be immediately thrown back to caller 281s raise error 281s except Exception as e: 281s if not cls.notebook_thread.is_alive(): 281s > raise RuntimeError("The notebook server failed to start") from e 281s E RuntimeError: The notebook server failed to start 281s 281s notebook/tests/launchnotebook.py:59: RuntimeError 281s ___________________ ERROR at setup of TreeTest.test_redirect ___________________ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s > sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 281s raise err 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s address = ('localhost', 12341), timeout = None, source_address = None 281s socket_options = [(6, 1, 1)] 281s 281s def create_connection( 281s address: tuple[str, int], 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s source_address: tuple[str, int] | None = None, 281s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 281s ) -> socket.socket: 281s """Connect to *address* and return the socket object. 281s 281s Convenience function. Connect to *address* (a 2-tuple ``(host, 281s port)``) and return the socket object. 
Passing the optional 281s *timeout* parameter will set the timeout on the socket instance 281s before attempting to connect. If no *timeout* is supplied, the 281s global default timeout setting returned by :func:`socket.getdefaulttimeout` 281s is used. If *source_address* is set it must be a tuple of (host, port) 281s for the socket to bind as a source address before making the connection. 281s An host of '' or port 0 tells the OS to use the default. 281s """ 281s 281s host, port = address 281s if host.startswith("["): 281s host = host.strip("[]") 281s err = None 281s 281s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 281s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 281s # The original create_connection function always returns all records. 281s family = allowed_gai_family() 281s 281s try: 281s host.encode("idna") 281s except UnicodeError: 281s raise LocationParseError(f"'{host}', label empty or too long") from None 281s 281s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 281s af, socktype, proto, canonname, sa = res 281s sock = None 281s try: 281s sock = socket.socket(af, socktype, proto) 281s 281s # If provided, set socket level options before connecting. 
281s _set_socket_options(sock, socket_options) 281s 281s if timeout is not _DEFAULT_TIMEOUT: 281s sock.settimeout(timeout) 281s if source_address: 281s sock.bind(source_address) 281s > sock.connect(sa) 281s E ConnectionRefusedError: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s method = 'GET', url = '/a%40b/api/contents', body = None 281s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 281s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s redirect = False, assert_same_host = False 281s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 281s release_conn = False, chunked = False, body_pos = None, preload_content = False 281s decode_content = False, response_kw = {} 281s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 281s destination_scheme = None, conn = None, release_this_conn = True 281s http_tunnel_required = False, err = None, clean_exit = False 281s 281s def urlopen( # type: ignore[override] 281s self, 281s method: str, 281s url: str, 281s body: _TYPE_BODY | None = None, 281s headers: typing.Mapping[str, str] | None = None, 281s retries: Retry | bool | int | None = None, 281s redirect: bool = True, 281s assert_same_host: bool = True, 281s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 281s pool_timeout: int | None = None, 281s release_conn: bool | None = None, 281s chunked: bool = False, 281s body_pos: _TYPE_BODY_POSITION | None = None, 281s preload_content: bool = True, 281s decode_content: bool = True, 281s **response_kw: typing.Any, 281s ) -> BaseHTTPResponse: 281s """ 281s Get a connection from the pool and perform an HTTP request. 
This is the 281s lowest level call for making a request, so you'll need to specify all 281s the raw details. 281s 281s .. note:: 281s 281s More commonly, it's appropriate to use a convenience method 281s such as :meth:`request`. 281s 281s .. note:: 281s 281s `release_conn` will only behave as expected if 281s `preload_content=False` because we want to make 281s `preload_content=False` the default behaviour someday soon without 281s breaking backwards compatibility. 281s 281s :param method: 281s HTTP request method (such as GET, POST, PUT, etc.) 281s 281s :param url: 281s The URL to perform the request on. 281s 281s :param body: 281s Data to send in the request body, either :class:`str`, :class:`bytes`, 281s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 281s 281s :param headers: 281s Dictionary of custom headers to send, such as User-Agent, 281s If-None-Match, etc. If None, pool headers are used. If provided, 281s these headers completely replace any pool-specific headers. 281s 281s :param retries: 281s Configure the number of retries to allow before raising a 281s :class:`~urllib3.exceptions.MaxRetryError` exception. 281s 281s Pass ``None`` to retry until you receive a response. Pass a 281s :class:`~urllib3.util.retry.Retry` object for fine-grained control 281s over different types of retries. 281s Pass an integer number to retry connection errors that many times, 281s but no other types of errors. Pass zero to never retry. 281s 281s If ``False``, then retries are disabled and any exception is raised 281s immediately. Also, instead of raising a MaxRetryError on redirects, 281s the redirect response will be returned. 281s 281s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 281s 281s :param redirect: 281s If True, automatically handle redirects (status codes 301, 302, 281s 303, 307, 308). Each redirect counts as a retry. Disabling retries 281s will disable redirect, too. 
281s 281s :param assert_same_host: 281s If ``True``, will make sure that the host of the pool requests is 281s consistent else will raise HostChangedError. When ``False``, you can 281s use the pool on an HTTP proxy and request foreign hosts. 281s 281s :param timeout: 281s If specified, overrides the default timeout for this one 281s request. It may be a float (in seconds) or an instance of 281s :class:`urllib3.util.Timeout`. 281s 281s :param pool_timeout: 281s If set and the pool is set to block=True, then this method will 281s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 281s connection is available within the time period. 281s 281s :param bool preload_content: 281s If True, the response's body will be preloaded into memory. 281s 281s :param bool decode_content: 281s If True, will attempt to decode the body based on the 281s 'content-encoding' header. 281s 281s :param release_conn: 281s If False, then the urlopen call will not release the connection 281s back into the pool once a response is received (but will release if 281s you read the entire contents of the response such as when 281s `preload_content=True`). This is useful if you're not preloading 281s the response's content immediately. You will need to call 281s ``r.release_conn()`` on the response ``r`` to return the connection 281s back into the pool. If None, it takes the value of ``preload_content`` 281s which defaults to ``True``. 281s 281s :param bool chunked: 281s If True, urllib3 will send the body using chunked transfer 281s encoding. Otherwise, urllib3 will send the body using the standard 281s content-length form. Defaults to False. 281s 281s :param int body_pos: 281s Position to seek to in file-like body in the event of a retry or 281s redirect. Typically this won't need to be set because urllib3 will 281s auto-populate the value when needed. 
281s """ 281s parsed_url = parse_url(url) 281s destination_scheme = parsed_url.scheme 281s 281s if headers is None: 281s headers = self.headers 281s 281s if not isinstance(retries, Retry): 281s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 281s 281s if release_conn is None: 281s release_conn = preload_content 281s 281s # Check host 281s if assert_same_host and not self.is_same_host(url): 281s raise HostChangedError(self, url, retries) 281s 281s # Ensure that the URL we're connecting to is properly encoded 281s if url.startswith("/"): 281s url = to_str(_encode_target(url)) 281s else: 281s url = to_str(parsed_url.url) 281s 281s conn = None 281s 281s # Track whether `conn` needs to be released before 281s # returning/raising/recursing. Update this variable if necessary, and 281s # leave `release_conn` constant throughout the function. That way, if 281s # the function recurses, the original value of `release_conn` will be 281s # passed down into the recursive call, and its value will be respected. 281s # 281s # See issue #651 [1] for details. 281s # 281s # [1] 281s release_this_conn = release_conn 281s 281s http_tunnel_required = connection_requires_http_tunnel( 281s self.proxy, self.proxy_config, destination_scheme 281s ) 281s 281s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 281s # have to copy the headers dict so we can safely change it without those 281s # changes being reflected in anyone else's copy. 281s if not http_tunnel_required: 281s headers = headers.copy() # type: ignore[attr-defined] 281s headers.update(self.proxy_headers) # type: ignore[union-attr] 281s 281s # Must keep the exception bound to a separate variable or else Python 3 281s # complains about UnboundLocalError. 281s err = None 281s 281s # Keep track of whether we cleanly exited the except block. This 281s # ensures we do proper cleanup in finally. 281s clean_exit = False 281s 281s # Rewind body position, if needed. 
Record current position 281s # for future rewinds in the event of a redirect/retry. 281s body_pos = set_file_position(body, body_pos) 281s 281s try: 281s # Request a connection from the queue. 281s timeout_obj = self._get_timeout(timeout) 281s conn = self._get_conn(timeout=pool_timeout) 281s 281s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 281s 281s # Is this a closed/new connection that requires CONNECT tunnelling? 281s if self.proxy is not None and http_tunnel_required and conn.is_closed: 281s try: 281s self._prepare_proxy(conn) 281s except (BaseSSLError, OSError, SocketTimeout) as e: 281s self._raise_timeout( 281s err=e, url=self.proxy.url, timeout_value=conn.timeout 281s ) 281s raise 281s 281s # If we're going to release the connection in ``finally:``, then 281s # the response doesn't need to know about the connection. Otherwise 281s # it will also try to release it and we'll have a double-release 281s # mess. 281s response_conn = conn if not release_conn else None 281s 281s # Make the request on the HTTPConnection object 281s > response = self._make_request( 281s conn, 281s method, 281s url, 281s timeout=timeout_obj, 281s body=body, 281s headers=headers, 281s chunked=chunked, 281s retries=retries, 281s response_conn=response_conn, 281s preload_content=preload_content, 281s decode_content=decode_content, 281s **response_kw, 281s ) 281s 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 281s conn.request( 281s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 281s self.endheaders() 281s /usr/lib/python3.12/http/client.py:1331: in endheaders 281s self._send_output(message_body, encode_chunked=encode_chunked) 281s /usr/lib/python3.12/http/client.py:1091: in _send_output 281s self.send(msg) 281s /usr/lib/python3.12/http/client.py:1035: in 
send 281s self.connect() 281s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 281s self.sock = self._new_conn() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _new_conn(self) -> socket.socket: 281s """Establish a socket connection and set nodelay settings on it. 281s 281s :return: New socket connection. 281s """ 281s try: 281s sock = connection.create_connection( 281s (self._dns_host, self.port), 281s self.timeout, 281s source_address=self.source_address, 281s socket_options=self.socket_options, 281s ) 281s except socket.gaierror as e: 281s raise NameResolutionError(self.host, self, e) from e 281s except SocketTimeout as e: 281s raise ConnectTimeoutError( 281s self, 281s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 281s ) from e 281s 281s except OSError as e: 281s > raise NewConnectionError( 281s self, f"Failed to establish a new connection: {e}" 281s ) from e 281s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 281s 281s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 281s 281s The above exception was the direct cause of the following exception: 281s 281s self = 281s request = , stream = False 281s timeout = Timeout(connect=None, read=None, total=None), verify = True 281s cert = None, proxies = OrderedDict() 281s 281s def send( 281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 281s ): 281s """Sends PreparedRequest object. Returns Response object. 281s 281s :param request: The :class:`PreparedRequest ` being sent. 281s :param stream: (optional) Whether to stream the request content. 281s :param timeout: (optional) How long to wait for the server to send 281s data before giving up, as a float, or a :ref:`(connect timeout, 281s read timeout) ` tuple. 
281s :type timeout: float or tuple or urllib3 Timeout object 281s :param verify: (optional) Either a boolean, in which case it controls whether 281s we verify the server's TLS certificate, or a string, in which case it 281s must be a path to a CA bundle to use 281s :param cert: (optional) Any user-provided SSL certificate to be trusted. 281s :param proxies: (optional) The proxies dictionary to apply to the request. 281s :rtype: requests.Response 281s """ 281s 281s try: 281s conn = self.get_connection(request.url, proxies) 281s except LocationValueError as e: 281s raise InvalidURL(e, request=request) 281s 281s self.cert_verify(conn, request.url, verify, cert) 281s url = self.request_url(request, proxies) 281s self.add_headers( 281s request, 281s stream=stream, 281s timeout=timeout, 281s verify=verify, 281s cert=cert, 281s proxies=proxies, 281s ) 281s 281s chunked = not (request.body is None or "Content-Length" in request.headers) 281s 281s if isinstance(timeout, tuple): 281s try: 281s connect, read = timeout 281s timeout = TimeoutSauce(connect=connect, read=read) 281s except ValueError: 281s raise ValueError( 281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 281s f"or a single float to set both timeouts to the same value." 
281s ) 281s elif isinstance(timeout, TimeoutSauce): 281s pass 281s else: 281s timeout = TimeoutSauce(connect=timeout, read=timeout) 281s 281s try: 281s > resp = conn.urlopen( 281s method=request.method, 281s url=url, 281s body=request.body, 281s headers=request.headers, 281s redirect=False, 281s assert_same_host=False, 281s preload_content=False, 281s decode_content=False, 281s retries=self.max_retries, 281s timeout=timeout, 281s chunked=chunked, 281s ) 281s 281s /usr/lib/python3/dist-packages/requests/adapters.py:486: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 281s retries = retries.increment( 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 281s method = 'GET', url = '/a%40b/api/contents', response = None 281s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 281s _pool = 281s _stacktrace = 281s 281s def increment( 281s self, 281s method: str | None = None, 281s url: str | None = None, 281s response: BaseHTTPResponse | None = None, 281s error: Exception | None = None, 281s _pool: ConnectionPool | None = None, 281s _stacktrace: TracebackType | None = None, 281s ) -> Retry: 281s """Return a new Retry object with incremented retry counters. 281s 281s :param response: A response object, or None, if the server did not 281s return a response. 281s :type response: :class:`~urllib3.response.BaseHTTPResponse` 281s :param Exception error: An error encountered during the request, or 281s None if the response was received successfully. 281s 281s :return: A new ``Retry`` object. 281s """ 281s if self.total is False and error: 281s # Disabled, indicate to re-raise the error. 
281s raise reraise(type(error), error, _stacktrace)
281s
281s total = self.total
281s if total is not None:
281s total -= 1
281s
281s connect = self.connect
281s read = self.read
281s redirect = self.redirect
281s status_count = self.status
281s other = self.other
281s cause = "unknown"
281s status = None
281s redirect_location = None
281s
281s if error and self._is_connection_error(error):
281s # Connect retry?
281s if connect is False:
281s raise reraise(type(error), error, _stacktrace)
281s elif connect is not None:
281s connect -= 1
281s
281s elif error and self._is_read_error(error):
281s # Read retry?
281s if read is False or method is None or not self._is_method_retryable(method):
281s raise reraise(type(error), error, _stacktrace)
281s elif read is not None:
281s read -= 1
281s
281s elif error:
281s # Other retry?
281s if other is not None:
281s other -= 1
281s
281s elif response and response.get_redirect_location():
281s # Redirect retry?
281s if redirect is not None:
281s redirect -= 1
281s cause = "too many redirects"
281s response_redirect_location = response.get_redirect_location()
281s if response_redirect_location:
281s redirect_location = response_redirect_location
281s status = response.status
281s
281s else:
281s # Incrementing because of a server error like a 500 in
281s # status_forcelist and the given method is in the allowed_methods
281s cause = ResponseError.GENERIC_ERROR
281s if response and response.status:
281s if status_count is not None:
281s status_count -= 1
281s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
281s status = response.status
281s
281s history = self.history + (
281s RequestHistory(method, url, error, status, redirect_location),
281s )
281s
281s new_retry = self.new(
281s total=total,
281s connect=connect,
281s read=read,
281s redirect=redirect,
281s status=status_count,
281s other=other,
281s history=history,
281s )
281s
281s if new_retry.is_exhausted():
281s reason = error or ResponseError(cause)
281s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
281s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s
281s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
281s
281s During handling of the above exception, another exception occurred:
281s
281s cls =
281s
281s @classmethod
281s def wait_until_alive(cls):
281s """Wait for the server to be alive"""
281s url = cls.base_url() + 'api/contents'
281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s try:
281s > cls.fetch_url(url)
281s
281s notebook/tests/launchnotebook.py:53:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s notebook/tests/launchnotebook.py:82: in fetch_url
281s return requests.get(url)
281s /usr/lib/python3/dist-packages/requests/api.py:73: in get
281s return request("get", url, params=params, **kwargs)
281s /usr/lib/python3/dist-packages/requests/api.py:59: in request
281s return session.request(method=method, url=url, **kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
281s resp = self.send(prep, **send_kwargs)
281s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
281s r = adapter.send(request, **kwargs)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s self =
281s request = , stream = False
281s timeout = Timeout(connect=None, read=None, total=None), verify = True
281s cert = None, proxies = OrderedDict()
281s
281s def send(
281s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
281s ):
281s """Sends PreparedRequest object. Returns Response object.
281s
281s :param request: The :class:`PreparedRequest ` being sent.
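[log annotation] The `MaxRetryError` above follows directly from `Retry(total=0)`: each failed attempt decrements the budget, and an exhausted retry object raises instead of returning a new one, which is why the very first connection refusal is fatal here. The exhaustion rule can be sketched in isolation (`SimpleRetry` is a toy stand-in, not urllib3's `Retry`):

```python
class RetryExhausted(Exception):
    """Raised when the retry budget is spent (analogue of MaxRetryError)."""

class SimpleRetry:
    """Toy version of the budget bookkeeping in Retry.increment (not urllib3)."""

    def __init__(self, total):
        self.total = total  # None means "no overall cap"

    def increment(self):
        # Every failed attempt consumes one unit of budget...
        if self.total is not None:
            self.total -= 1
        # ...and a budget below zero means the attempt may not be retried.
        if self.total is not None and self.total < 0:
            raise RetryExhausted("max retries exceeded")
        return self
```

With `total=0`, as in the traceback, the first failure already drives the budget below zero, so the error surfaces immediately instead of being retried.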
281s :param stream: (optional) Whether to stream the request content.
281s :param timeout: (optional) How long to wait for the server to send
281s data before giving up, as a float, or a :ref:`(connect timeout,
281s read timeout) ` tuple.
281s :type timeout: float or tuple or urllib3 Timeout object
281s :param verify: (optional) Either a boolean, in which case it controls whether
281s we verify the server's TLS certificate, or a string, in which case it
281s must be a path to a CA bundle to use
281s :param cert: (optional) Any user-provided SSL certificate to be trusted.
281s :param proxies: (optional) The proxies dictionary to apply to the request.
281s :rtype: requests.Response
281s """
281s
281s try:
281s conn = self.get_connection(request.url, proxies)
281s except LocationValueError as e:
281s raise InvalidURL(e, request=request)
281s
281s self.cert_verify(conn, request.url, verify, cert)
281s url = self.request_url(request, proxies)
281s self.add_headers(
281s request,
281s stream=stream,
281s timeout=timeout,
281s verify=verify,
281s cert=cert,
281s proxies=proxies,
281s )
281s
281s chunked = not (request.body is None or "Content-Length" in request.headers)
281s
281s if isinstance(timeout, tuple):
281s try:
281s connect, read = timeout
281s timeout = TimeoutSauce(connect=connect, read=read)
281s except ValueError:
281s raise ValueError(
281s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
281s f"or a single float to set both timeouts to the same value."
281s )
281s elif isinstance(timeout, TimeoutSauce):
281s pass
281s else:
281s timeout = TimeoutSauce(connect=timeout, read=timeout)
281s
281s try:
281s resp = conn.urlopen(
281s method=request.method,
281s url=url,
281s body=request.body,
281s headers=request.headers,
281s redirect=False,
281s assert_same_host=False,
281s preload_content=False,
281s decode_content=False,
281s retries=self.max_retries,
281s timeout=timeout,
281s chunked=chunked,
281s )
281s
281s except (ProtocolError, OSError) as err:
281s raise ConnectionError(err, request=request)
281s
281s except MaxRetryError as e:
281s if isinstance(e.reason, ConnectTimeoutError):
281s # TODO: Remove this in 3.0.0: see #2811
281s if not isinstance(e.reason, NewConnectionError):
281s raise ConnectTimeout(e, request=request)
281s
281s if isinstance(e.reason, ResponseError):
281s raise RetryError(e, request=request)
281s
281s if isinstance(e.reason, _ProxyError):
281s raise ProxyError(e, request=request)
281s
281s if isinstance(e.reason, _SSLError):
281s # This branch is for urllib3 v1.22 and later.
281s raise SSLError(e, request=request)
281s
281s > raise ConnectionError(e, request=request)
281s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
281s
281s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
281s
281s The above exception was the direct cause of the following exception:
281s
281s cls =
281s
281s @classmethod
281s def setup_class(cls):
281s cls.tmp_dir = TemporaryDirectory()
281s def tmp(*parts):
281s path = os.path.join(cls.tmp_dir.name, *parts)
281s try:
281s os.makedirs(path)
281s except OSError as e:
281s if e.errno != errno.EEXIST:
281s raise
281s return path
281s
281s cls.home_dir = tmp('home')
281s data_dir = cls.data_dir = tmp('data')
281s config_dir = cls.config_dir = tmp('config')
281s runtime_dir = cls.runtime_dir = tmp('runtime')
281s cls.notebook_dir = tmp('notebooks')
281s cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
281s cls.env_patch.start()
281s # Patch systemwide & user-wide data & config directories, to isolate
281s # the tests from oddities of the local setup. But leave Python env
281s # locations alone, so data files for e.g. nbconvert are accessible.
281s # If this isolation isn't sufficient, you may need to run the tests in
281s # a virtualenv or conda env.
281s cls.path_patch = patch.multiple(
281s jupyter_core.paths,
281s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
281s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
281s )
281s cls.path_patch.start()
281s
281s config = cls.config or Config()
281s config.NotebookNotary.db_file = ':memory:'
281s
281s cls.token = hexlify(os.urandom(4)).decode('ascii')
281s
281s started = Event()
281s def start_thread():
281s try:
281s bind_args = cls.get_bind_args()
281s app = cls.notebook = NotebookApp(
281s port_retries=0,
281s open_browser=False,
281s config_dir=cls.config_dir,
281s data_dir=cls.data_dir,
281s runtime_dir=cls.runtime_dir,
281s notebook_dir=cls.notebook_dir,
281s base_url=cls.url_prefix,
281s config=config,
281s allow_root=True,
281s token=cls.token,
281s **bind_args
281s )
281s if "asyncio" in sys.modules:
281s app._init_asyncio_patch()
281s import asyncio
281s
281s asyncio.set_event_loop(asyncio.new_event_loop())
281s # Patch the current loop in order to match production
281s # behavior
281s import nest_asyncio
281s
281s nest_asyncio.apply()
281s # don't register signal handler during tests
281s app.init_signal = lambda : None
281s # clear log handlers and propagate to root for nose to capture it
281s # needs to be redone after initialize, which reconfigures logging
281s app.log.propagate = True
281s app.log.handlers = []
281s app.initialize(argv=cls.get_argv())
281s app.log.propagate = True
281s app.log.handlers = []
281s loop = IOLoop.current()
281s loop.add_callback(started.set)
281s app.start()
281s finally:
281s # set the event, so failure to start doesn't cause a hang
281s started.set()
281s app.session_manager.close()
281s cls.notebook_thread = Thread(target=start_thread)
281s cls.notebook_thread.daemon = True
281s cls.notebook_thread.start()
281s started.wait()
281s > cls.wait_until_alive()
281s
281s notebook/tests/launchnotebook.py:198:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s cls =
281s
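[log annotation] `setup_class` above runs the server in a daemon thread and synchronizes on a `threading.Event`, setting the event in a `finally:` block so a failed start cannot hang the waiting caller. That handshake, reduced to its generic shape (`start_in_background` and `run_server` are illustrative names, not the notebook test API):

```python
from threading import Event, Thread

def start_in_background(run_server):
    """Start run_server in a daemon thread; block until it signals readiness.

    run_server must accept an on_ready callback (illustrative contract).
    """
    started = Event()
    failures = []

    def runner():
        try:
            run_server(on_ready=started.set)
        except Exception as exc:  # record the failure instead of losing it
            failures.append(exc)
        finally:
            # Always set the event, so a failed start doesn't hang the caller.
            started.set()

    thread = Thread(target=runner, daemon=True)
    thread.start()
    started.wait()
    return thread, failures
```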
281s @classmethod
281s def wait_until_alive(cls):
281s """Wait for the server to be alive"""
281s url = cls.base_url() + 'api/contents'
281s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
281s try:
281s cls.fetch_url(url)
281s except ModuleNotFoundError as error:
281s # Errors that should be immediately thrown back to caller
281s raise error
281s except Exception as e:
281s if not cls.notebook_thread.is_alive():
281s > raise RuntimeError("The notebook server failed to start") from e
281s E RuntimeError: The notebook server failed to start
281s
281s notebook/tests/launchnotebook.py:59: RuntimeError
281s =================================== FAILURES ===================================
281s __________________ TestSessionManager.test_bad_delete_session __________________
281s
281s self =
281s
281s def _resolve_classes(self):
281s # Resolve all string names to actual classes.
281s self.importable_klasses = []
281s for klass in self.klasses:
281s if isinstance(klass, str):
281s try:
281s > klass = self._resolve_string(klass)
281s
281s notebook/traittypes.py:336:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string
281s return import_item(string)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s name = 'jupyter_server.services.contents.manager.ContentsManager'
281s
281s def import_item(name: str) -> Any:
281s """Import and return ``bar`` given the string ``foo.bar``.
281s
281s Calling ``bar = import_item("foo.bar")`` is the functional equivalent of
281s executing the code ``from foo import bar``.
281s
281s Parameters
281s ----------
281s name : string
281s The fully qualified name of the module/package being imported.
281s
281s Returns
281s -------
281s mod : module object
281s The module that was imported.
281s """
281s if not isinstance(name, str):
281s raise TypeError("import_item accepts strings, not '%s'." % type(name))
281s parts = name.rsplit(".", 1)
281s if len(parts) == 2:
281s # called with 'foo.bar....'
281s package, obj = parts
281s > module = __import__(package, fromlist=[obj])
281s E ModuleNotFoundError: No module named 'jupyter_server'
281s
281s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError
281s
281s During handling of the above exception, another exception occurred:
281s
281s self =
281s
281s def setUp(self):
281s > self.sm = SessionManager(
281s kernel_manager=DummyMKM(),
281s contents_manager=ContentsManager(),
281s )
281s
281s notebook/services/sessions/tests/test_sessionmanager.py:45:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__
281s inst.setup_instance(*args, **kwargs)
281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance
281s super(HasTraits, self).setup_instance(*args, **kwargs)
281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance
281s init(self)
281s notebook/traittypes.py:327: in instance_init
281s self._resolve_classes()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s self =
281s
281s def _resolve_classes(self):
281s # Resolve all string names to actual classes.
281s self.importable_klasses = []
281s for klass in self.klasses:
281s if isinstance(klass, str):
281s try:
281s klass = self._resolve_string(klass)
281s self.importable_klasses.append(klass)
281s except:
281s > warn(f"{klass} is not importable. Is it installed?", ImportWarning)
281s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel'
281s
281s notebook/traittypes.py:339: TypeError
281s ___________________ TestSessionManager.test_bad_get_session ____________________
281s
281s self =
281s
281s def _resolve_classes(self):
281s # Resolve all string names to actual classes.
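[log annotation] Every `TestSessionManager` failure in this run ends the same way: the fallback `warn(...)` call at notebook/traittypes.py:339 itself raises, because the `warn` helper in scope takes `stacklevel` as a required keyword-only argument. A minimal, self-contained reproduction with a local stand-in that mirrors the signature implied by the log (this `warn` is illustrative, not the actual helper imported by traittypes.py):

```python
import warnings

def warn(msg, category, *, stacklevel):
    """Local stand-in mirroring the keyword-only signature from the log."""
    warnings.warn(msg, category, stacklevel=stacklevel)

# Positional two-argument call, as at notebook/traittypes.py:339:
# raises TypeError because stacklevel is keyword-only and has no default.
try:
    warn("SomeClass is not importable. Is it installed?", ImportWarning)
except TypeError:
    pass

# Passing stacklevel explicitly avoids the TypeError.
with warnings.catch_warnings(record=True):
    warnings.simplefilter("always")
    warn("SomeClass is not importable. Is it installed?", ImportWarning, stacklevel=2)
```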
281s self.importable_klasses = []
281s for klass in self.klasses:
281s if isinstance(klass, str):
281s try:
281s > klass = self._resolve_string(klass)
281s
281s notebook/traittypes.py:336:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string
281s return import_item(string)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s name = 'jupyter_server.services.contents.manager.ContentsManager'
281s
281s def import_item(name: str) -> Any:
281s """Import and return ``bar`` given the string ``foo.bar``.
281s
281s Calling ``bar = import_item("foo.bar")`` is the functional equivalent of
281s executing the code ``from foo import bar``.
281s
281s Parameters
281s ----------
281s name : string
281s The fully qualified name of the module/package being imported.
281s
281s Returns
281s -------
281s mod : module object
281s The module that was imported.
281s """
281s if not isinstance(name, str):
281s raise TypeError("import_item accepts strings, not '%s'." % type(name))
281s parts = name.rsplit(".", 1)
281s if len(parts) == 2:
281s # called with 'foo.bar....'
281s package, obj = parts
281s > module = __import__(package, fromlist=[obj])
281s E ModuleNotFoundError: No module named 'jupyter_server'
281s
281s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError
281s
281s During handling of the above exception, another exception occurred:
281s
281s self =
281s
281s def setUp(self):
281s > self.sm = SessionManager(
281s kernel_manager=DummyMKM(),
281s contents_manager=ContentsManager(),
281s )
281s
281s notebook/services/sessions/tests/test_sessionmanager.py:45:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__
281s inst.setup_instance(*args, **kwargs)
281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance
281s super(HasTraits, self).setup_instance(*args, **kwargs)
281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance
281s init(self)
281s notebook/traittypes.py:327: in instance_init
281s self._resolve_classes()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s self =
281s
281s def _resolve_classes(self):
281s # Resolve all string names to actual classes.
281s self.importable_klasses = []
281s for klass in self.klasses:
281s if isinstance(klass, str):
281s try:
281s klass = self._resolve_string(klass)
281s self.importable_klasses.append(klass)
281s except:
281s > warn(f"{klass} is not importable. Is it installed?", ImportWarning)
281s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel'
281s
281s notebook/traittypes.py:339: TypeError
281s __________________ TestSessionManager.test_bad_update_session __________________
281s
281s self =
281s
281s def _resolve_classes(self):
281s # Resolve all string names to actual classes.
281s self.importable_klasses = []
281s for klass in self.klasses:
281s if isinstance(klass, str):
281s try:
281s > klass = self._resolve_string(klass)
281s
281s notebook/traittypes.py:336:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string
281s return import_item(string)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s name = 'jupyter_server.services.contents.manager.ContentsManager'
281s
281s def import_item(name: str) -> Any:
281s """Import and return ``bar`` given the string ``foo.bar``.
281s
281s Calling ``bar = import_item("foo.bar")`` is the functional equivalent of
281s executing the code ``from foo import bar``.
281s
281s Parameters
281s ----------
281s name : string
281s The fully qualified name of the module/package being imported.
281s
281s Returns
281s -------
281s mod : module object
281s The module that was imported.
281s """
281s if not isinstance(name, str):
281s raise TypeError("import_item accepts strings, not '%s'." % type(name))
281s parts = name.rsplit(".", 1)
281s if len(parts) == 2:
281s # called with 'foo.bar....'
281s package, obj = parts
281s > module = __import__(package, fromlist=[obj])
281s E ModuleNotFoundError: No module named 'jupyter_server'
281s
281s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError
281s
281s During handling of the above exception, another exception occurred:
281s
281s self =
281s
281s def setUp(self):
281s > self.sm = SessionManager(
281s kernel_manager=DummyMKM(),
281s contents_manager=ContentsManager(),
281s )
281s
281s notebook/services/sessions/tests/test_sessionmanager.py:45:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__
281s inst.setup_instance(*args, **kwargs)
281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance
281s super(HasTraits, self).setup_instance(*args, **kwargs)
281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance
281s init(self)
281s notebook/traittypes.py:327: in instance_init
281s self._resolve_classes()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s self =
281s
281s def _resolve_classes(self):
281s # Resolve all string names to actual classes.
281s self.importable_klasses = []
281s for klass in self.klasses:
281s if isinstance(klass, str):
281s try:
281s klass = self._resolve_string(klass)
281s self.importable_klasses.append(klass)
281s except:
281s > warn(f"{klass} is not importable. Is it installed?", ImportWarning)
281s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel'
281s
281s notebook/traittypes.py:339: TypeError
281s ____________________ TestSessionManager.test_delete_session ____________________
281s
281s self =
281s
281s def _resolve_classes(self):
281s # Resolve all string names to actual classes.
281s self.importable_klasses = []
281s for klass in self.klasses:
281s if isinstance(klass, str):
281s try:
281s > klass = self._resolve_string(klass)
281s
281s notebook/traittypes.py:336:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string
281s return import_item(string)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s name = 'jupyter_server.services.contents.manager.ContentsManager'
281s
281s def import_item(name: str) -> Any:
281s """Import and return ``bar`` given the string ``foo.bar``.
281s
281s Calling ``bar = import_item("foo.bar")`` is the functional equivalent of
281s executing the code ``from foo import bar``.
281s
281s Parameters
281s ----------
281s name : string
281s The fully qualified name of the module/package being imported.
281s
281s Returns
281s -------
281s mod : module object
281s The module that was imported.
281s """
281s if not isinstance(name, str):
281s raise TypeError("import_item accepts strings, not '%s'." % type(name))
281s parts = name.rsplit(".", 1)
281s if len(parts) == 2:
281s # called with 'foo.bar....'
281s package, obj = parts
281s > module = __import__(package, fromlist=[obj])
281s E ModuleNotFoundError: No module named 'jupyter_server'
281s
281s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError
281s
281s During handling of the above exception, another exception occurred:
281s
281s self =
281s
281s def setUp(self):
281s > self.sm = SessionManager(
281s kernel_manager=DummyMKM(),
281s contents_manager=ContentsManager(),
281s )
281s
281s notebook/services/sessions/tests/test_sessionmanager.py:45:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__
281s inst.setup_instance(*args, **kwargs)
281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance
281s super(HasTraits, self).setup_instance(*args, **kwargs)
281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance
281s init(self)
281s notebook/traittypes.py:327: in instance_init
281s self._resolve_classes()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s self =
281s
281s def _resolve_classes(self):
281s # Resolve all string names to actual classes.
281s self.importable_klasses = []
281s for klass in self.klasses:
281s if isinstance(klass, str):
281s try:
281s klass = self._resolve_string(klass)
281s self.importable_klasses.append(klass)
281s except:
281s > warn(f"{klass} is not importable. Is it installed?", ImportWarning)
281s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel'
281s
281s notebook/traittypes.py:339: TypeError
281s _____________________ TestSessionManager.test_get_session ______________________
281s
281s self =
281s
281s def _resolve_classes(self):
281s # Resolve all string names to actual classes.
281s self.importable_klasses = []
281s for klass in self.klasses:
281s if isinstance(klass, str):
281s try:
281s > klass = self._resolve_string(klass)
281s
281s notebook/traittypes.py:336:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string
281s return import_item(string)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s name = 'jupyter_server.services.contents.manager.ContentsManager'
281s
281s def import_item(name: str) -> Any:
281s """Import and return ``bar`` given the string ``foo.bar``.
281s
281s Calling ``bar = import_item("foo.bar")`` is the functional equivalent of
281s executing the code ``from foo import bar``.
281s
281s Parameters
281s ----------
281s name : string
281s The fully qualified name of the module/package being imported.
281s
281s Returns
281s -------
281s mod : module object
281s The module that was imported.
281s """
281s if not isinstance(name, str):
281s raise TypeError("import_item accepts strings, not '%s'." % type(name))
281s parts = name.rsplit(".", 1)
281s if len(parts) == 2:
281s # called with 'foo.bar....'
281s package, obj = parts
281s > module = __import__(package, fromlist=[obj])
281s E ModuleNotFoundError: No module named 'jupyter_server'
281s
281s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError
281s
281s During handling of the above exception, another exception occurred:
281s
281s self =
281s
281s def setUp(self):
281s > self.sm = SessionManager(
281s kernel_manager=DummyMKM(),
281s contents_manager=ContentsManager(),
281s )
281s
281s notebook/services/sessions/tests/test_sessionmanager.py:45:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__
281s inst.setup_instance(*args, **kwargs)
281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance
281s super(HasTraits, self).setup_instance(*args, **kwargs)
281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance
281s init(self)
281s notebook/traittypes.py:327: in instance_init
281s self._resolve_classes()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s self =
281s
281s def _resolve_classes(self):
281s # Resolve all string names to actual classes.
281s self.importable_klasses = []
281s for klass in self.klasses:
281s if isinstance(klass, str):
281s try:
281s klass = self._resolve_string(klass)
281s self.importable_klasses.append(klass)
281s except:
281s > warn(f"{klass} is not importable. Is it installed?", ImportWarning)
281s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel'
281s
281s notebook/traittypes.py:339: TypeError
281s _______________ TestSessionManager.test_get_session_dead_kernel ________________
281s
281s self =
281s
281s def _resolve_classes(self):
281s # Resolve all string names to actual classes.
281s self.importable_klasses = []
281s for klass in self.klasses:
281s if isinstance(klass, str):
281s try:
281s > klass = self._resolve_string(klass)
281s
281s notebook/traittypes.py:336:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string
281s return import_item(string)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s name = 'jupyter_server.services.contents.manager.ContentsManager'
281s
281s def import_item(name: str) -> Any:
281s """Import and return ``bar`` given the string ``foo.bar``.
281s
281s Calling ``bar = import_item("foo.bar")`` is the functional equivalent of
281s executing the code ``from foo import bar``.
281s
281s Parameters
281s ----------
281s name : string
281s The fully qualified name of the module/package being imported.
281s
281s Returns
281s -------
281s mod : module object
281s The module that was imported.
281s """
281s if not isinstance(name, str):
281s raise TypeError("import_item accepts strings, not '%s'." % type(name))
281s parts = name.rsplit(".", 1)
281s if len(parts) == 2:
281s # called with 'foo.bar....'
281s package, obj = parts
281s > module = __import__(package, fromlist=[obj])
281s E ModuleNotFoundError: No module named 'jupyter_server'
281s
281s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError
281s
281s During handling of the above exception, another exception occurred:
281s
281s self =
281s
281s def setUp(self):
281s > self.sm = SessionManager(
281s kernel_manager=DummyMKM(),
281s contents_manager=ContentsManager(),
281s )
281s
281s notebook/services/sessions/tests/test_sessionmanager.py:45:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__
281s inst.setup_instance(*args, **kwargs)
281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance
281s super(HasTraits, self).setup_instance(*args, **kwargs)
281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance
281s init(self)
281s notebook/traittypes.py:327: in instance_init
281s self._resolve_classes()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s self =
281s
281s def _resolve_classes(self):
281s # Resolve all string names to actual classes.
281s self.importable_klasses = []
281s for klass in self.klasses:
281s if isinstance(klass, str):
281s try:
281s klass = self._resolve_string(klass)
281s self.importable_klasses.append(klass)
281s except:
281s > warn(f"{klass} is not importable. Is it installed?", ImportWarning)
281s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel'
281s
281s notebook/traittypes.py:339: TypeError
281s ____________________ TestSessionManager.test_list_sessions _____________________
281s
281s self =
281s
281s def _resolve_classes(self):
281s # Resolve all string names to actual classes.
281s self.importable_klasses = []
281s for klass in self.klasses:
281s if isinstance(klass, str):
281s try:
281s > klass = self._resolve_string(klass)
281s
281s notebook/traittypes.py:336:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string
281s return import_item(string)
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s name = 'jupyter_server.services.contents.manager.ContentsManager'
281s
281s def import_item(name: str) -> Any:
281s """Import and return ``bar`` given the string ``foo.bar``.
281s
281s Calling ``bar = import_item("foo.bar")`` is the functional equivalent of
281s executing the code ``from foo import bar``.
281s
281s Parameters
281s ----------
281s name : string
281s The fully qualified name of the module/package being imported.
281s
281s Returns
281s -------
281s mod : module object
281s The module that was imported.
281s """
281s if not isinstance(name, str):
281s raise TypeError("import_item accepts strings, not '%s'." % type(name))
281s parts = name.rsplit(".", 1)
281s if len(parts) == 2:
281s # called with 'foo.bar....'
281s package, obj = parts
281s > module = __import__(package, fromlist=[obj])
281s E ModuleNotFoundError: No module named 'jupyter_server'
281s
281s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError
281s
281s During handling of the above exception, another exception occurred:
281s
281s self =
281s
281s def setUp(self):
281s > self.sm = SessionManager(
281s kernel_manager=DummyMKM(),
281s contents_manager=ContentsManager(),
281s )
281s
281s notebook/services/sessions/tests/test_sessionmanager.py:45:
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__
281s inst.setup_instance(*args, **kwargs)
281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance
281s super(HasTraits, self).setup_instance(*args, **kwargs)
281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance
281s init(self)
281s notebook/traittypes.py:327: in instance_init
281s self._resolve_classes()
281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
281s
281s self =
281s
281s def _resolve_classes(self):
281s # Resolve all string names to actual classes.
281s self.importable_klasses = []
281s for klass in self.klasses:
281s if isinstance(klass, str):
281s try:
281s klass = self._resolve_string(klass)
281s self.importable_klasses.append(klass)
281s except:
281s > warn(f"{klass} is not importable. Is it installed?", ImportWarning)
281s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel'
281s
281s notebook/traittypes.py:339: TypeError
281s ______________ TestSessionManager.test_list_sessions_dead_kernel _______________
281s
281s self =
281s
281s def _resolve_classes(self):
281s # Resolve all string names to actual classes.
281s self.importable_klasses = [] 281s for klass in self.klasses: 281s if isinstance(klass, str): 281s try: 281s > klass = self._resolve_string(klass) 281s 281s notebook/traittypes.py:336: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string 281s return import_item(string) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s name = 'jupyter_server.services.contents.manager.ContentsManager' 281s 281s def import_item(name: str) -> Any: 281s """Import and return ``bar`` given the string ``foo.bar``. 281s 281s Calling ``bar = import_item("foo.bar")`` is the functional equivalent of 281s executing the code ``from foo import bar``. 281s 281s Parameters 281s ---------- 281s name : string 281s The fully qualified name of the module/package being imported. 281s 281s Returns 281s ------- 281s mod : module object 281s The module that was imported. 281s """ 281s if not isinstance(name, str): 281s raise TypeError("import_item accepts strings, not '%s'." % type(name)) 281s parts = name.rsplit(".", 1) 281s if len(parts) == 2: 281s # called with 'foo.bar....' 
281s package, obj = parts 281s > module = __import__(package, fromlist=[obj]) 281s E ModuleNotFoundError: No module named 'jupyter_server' 281s 281s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError 281s 281s During handling of the above exception, another exception occurred: 281s 281s self = 281s 281s def setUp(self): 281s > self.sm = SessionManager( 281s kernel_manager=DummyMKM(), 281s contents_manager=ContentsManager(), 281s ) 281s 281s notebook/services/sessions/tests/test_sessionmanager.py:45: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__ 281s inst.setup_instance(*args, **kwargs) 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance 281s super(HasTraits, self).setup_instance(*args, **kwargs) 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance 281s init(self) 281s notebook/traittypes.py:327: in instance_init 281s self._resolve_classes() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _resolve_classes(self): 281s # Resolve all string names to actual classes. 281s self.importable_klasses = [] 281s for klass in self.klasses: 281s if isinstance(klass, str): 281s try: 281s klass = self._resolve_string(klass) 281s self.importable_klasses.append(klass) 281s except: 281s > warn(f"{klass} is not importable. Is it installed?", ImportWarning) 281s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 281s 281s notebook/traittypes.py:339: TypeError 281s ____________________ TestSessionManager.test_update_session ____________________ 281s 281s self = 281s 281s def _resolve_classes(self): 281s # Resolve all string names to actual classes. 
281s self.importable_klasses = [] 281s for klass in self.klasses: 281s if isinstance(klass, str): 281s try: 281s > klass = self._resolve_string(klass) 281s 281s notebook/traittypes.py:336: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string 281s return import_item(string) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s name = 'jupyter_server.services.contents.manager.ContentsManager' 281s 281s def import_item(name: str) -> Any: 281s """Import and return ``bar`` given the string ``foo.bar``. 281s 281s Calling ``bar = import_item("foo.bar")`` is the functional equivalent of 281s executing the code ``from foo import bar``. 281s 281s Parameters 281s ---------- 281s name : string 281s The fully qualified name of the module/package being imported. 281s 281s Returns 281s ------- 281s mod : module object 281s The module that was imported. 281s """ 281s if not isinstance(name, str): 281s raise TypeError("import_item accepts strings, not '%s'." % type(name)) 281s parts = name.rsplit(".", 1) 281s if len(parts) == 2: 281s # called with 'foo.bar....' 
281s package, obj = parts 281s > module = __import__(package, fromlist=[obj]) 281s E ModuleNotFoundError: No module named 'jupyter_server' 281s 281s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError 281s 281s During handling of the above exception, another exception occurred: 281s 281s self = 281s 281s def setUp(self): 281s > self.sm = SessionManager( 281s kernel_manager=DummyMKM(), 281s contents_manager=ContentsManager(), 281s ) 281s 281s notebook/services/sessions/tests/test_sessionmanager.py:45: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__ 281s inst.setup_instance(*args, **kwargs) 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance 281s super(HasTraits, self).setup_instance(*args, **kwargs) 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance 281s init(self) 281s notebook/traittypes.py:327: in instance_init 281s self._resolve_classes() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _resolve_classes(self): 281s # Resolve all string names to actual classes. 281s self.importable_klasses = [] 281s for klass in self.klasses: 281s if isinstance(klass, str): 281s try: 281s klass = self._resolve_string(klass) 281s self.importable_klasses.append(klass) 281s except: 281s > warn(f"{klass} is not importable. 
Is it installed?", ImportWarning) 281s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 281s 281s notebook/traittypes.py:339: TypeError 281s _______________________________ test_help_output _______________________________ 281s 281s def test_help_output(): 281s """ipython notebook --help-all works""" 281s > check_help_all_output('notebook') 281s 281s notebook/tests/test_notebookapp.py:28: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s pkg = 'notebook', subcommand = None 281s 281s def check_help_all_output(pkg: str, subcommand: Sequence[str] | None = None) -> tuple[str, str]: 281s """test that `python -m PKG --help-all` works""" 281s cmd = [sys.executable, "-m", pkg] 281s if subcommand: 281s cmd.extend(subcommand) 281s cmd.append("--help-all") 281s out, err, rc = get_output_error_code(cmd) 281s > assert rc == 0, err 281s E AssertionError: Traceback (most recent call last): 281s E File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 235, in _resolve_classes 281s E klass = self._resolve_string(klass) 281s E ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 281s E File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string 281s E return import_item(string) 281s E ^^^^^^^^^^^^^^^^^^^ 281s E File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item 281s E module = __import__(package, fromlist=[obj]) 281s E ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 281s E ModuleNotFoundError: No module named 'jupyter_server' 281s E 281s E During handling of the above exception, another exception occurred: 281s E 281s E Traceback (most recent call last): 281s E File "", line 198, in _run_module_as_main 281s E File "", line 88, in _run_code 281s E File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/__main__.py", line 3, in 281s E app.launch_new_instance() 281s E File "/usr/lib/python3/dist-packages/jupyter_core/application.py", line 282, in launch_instance 
281s E super().launch_instance(argv=argv, **kwargs) 281s E File "/usr/lib/python3/dist-packages/traitlets/config/application.py", line 1073, in launch_instance 281s E app = cls.instance(**kwargs) 281s E ^^^^^^^^^^^^^^^^^^^^^^ 281s E File "/usr/lib/python3/dist-packages/traitlets/config/configurable.py", line 583, in instance 281s E inst = cls(*args, **kwargs) 281s E ^^^^^^^^^^^^^^^^^^^^ 281s E File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__ 281s E inst.setup_instance(*args, **kwargs) 281s E File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance 281s E super(HasTraits, self).setup_instance(*args, **kwargs) 281s E File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance 281s E init(self) 281s E File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 226, in instance_init 281s E self._resolve_classes() 281s E File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 238, in _resolve_classes 281s E warn(f"{klass} is not importable. Is it installed?", ImportWarning) 281s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 281s 281s /usr/lib/python3/dist-packages/traitlets/tests/utils.py:38: AssertionError 281s ____________________________ test_server_info_file _____________________________ 281s 281s self = 281s 281s def _resolve_classes(self): 281s # Resolve all string names to actual classes. 
281s self.importable_klasses = [] 281s for klass in self.klasses: 281s if isinstance(klass, str): 281s try: 281s > klass = self._resolve_string(klass) 281s 281s notebook/traittypes.py:235: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string 281s return import_item(string) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s name = 'jupyter_server.contents.services.managers.ContentsManager' 281s 281s def import_item(name: str) -> Any: 281s """Import and return ``bar`` given the string ``foo.bar``. 281s 281s Calling ``bar = import_item("foo.bar")`` is the functional equivalent of 281s executing the code ``from foo import bar``. 281s 281s Parameters 281s ---------- 281s name : string 281s The fully qualified name of the module/package being imported. 281s 281s Returns 281s ------- 281s mod : module object 281s The module that was imported. 281s """ 281s if not isinstance(name, str): 281s raise TypeError("import_item accepts strings, not '%s'." % type(name)) 281s parts = name.rsplit(".", 1) 281s if len(parts) == 2: 281s # called with 'foo.bar....' 
281s package, obj = parts 281s > module = __import__(package, fromlist=[obj]) 281s E ModuleNotFoundError: No module named 'jupyter_server' 281s 281s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError 281s 281s During handling of the above exception, another exception occurred: 281s 281s def test_server_info_file(): 281s td = TemporaryDirectory() 281s > nbapp = NotebookApp(runtime_dir=td.name, log=logging.getLogger()) 281s 281s notebook/tests/test_notebookapp.py:32: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__ 281s inst.setup_instance(*args, **kwargs) 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance 281s super(HasTraits, self).setup_instance(*args, **kwargs) 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance 281s init(self) 281s notebook/traittypes.py:226: in instance_init 281s self._resolve_classes() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _resolve_classes(self): 281s # Resolve all string names to actual classes. 281s self.importable_klasses = [] 281s for klass in self.klasses: 281s if isinstance(klass, str): 281s try: 281s klass = self._resolve_string(klass) 281s self.importable_klasses.append(klass) 281s except: 281s > warn(f"{klass} is not importable. Is it installed?", ImportWarning) 281s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 281s 281s notebook/traittypes.py:238: TypeError 281s _________________________________ test_nb_dir __________________________________ 281s 281s self = 281s 281s def _resolve_classes(self): 281s # Resolve all string names to actual classes. 
281s self.importable_klasses = [] 281s for klass in self.klasses: 281s if isinstance(klass, str): 281s try: 281s > klass = self._resolve_string(klass) 281s 281s notebook/traittypes.py:235: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string 281s return import_item(string) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s name = 'jupyter_server.contents.services.managers.ContentsManager' 281s 281s def import_item(name: str) -> Any: 281s """Import and return ``bar`` given the string ``foo.bar``. 281s 281s Calling ``bar = import_item("foo.bar")`` is the functional equivalent of 281s executing the code ``from foo import bar``. 281s 281s Parameters 281s ---------- 281s name : string 281s The fully qualified name of the module/package being imported. 281s 281s Returns 281s ------- 281s mod : module object 281s The module that was imported. 281s """ 281s if not isinstance(name, str): 281s raise TypeError("import_item accepts strings, not '%s'." % type(name)) 281s parts = name.rsplit(".", 1) 281s if len(parts) == 2: 281s # called with 'foo.bar....' 
281s package, obj = parts 281s > module = __import__(package, fromlist=[obj]) 281s E ModuleNotFoundError: No module named 'jupyter_server' 281s 281s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError 281s 281s During handling of the above exception, another exception occurred: 281s 281s def test_nb_dir(): 281s with TemporaryDirectory() as td: 281s > app = NotebookApp(notebook_dir=td) 281s 281s notebook/tests/test_notebookapp.py:49: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__ 281s inst.setup_instance(*args, **kwargs) 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance 281s super(HasTraits, self).setup_instance(*args, **kwargs) 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance 281s init(self) 281s notebook/traittypes.py:226: in instance_init 281s self._resolve_classes() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _resolve_classes(self): 281s # Resolve all string names to actual classes. 281s self.importable_klasses = [] 281s for klass in self.klasses: 281s if isinstance(klass, str): 281s try: 281s klass = self._resolve_string(klass) 281s self.importable_klasses.append(klass) 281s except: 281s > warn(f"{klass} is not importable. Is it installed?", ImportWarning) 281s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 281s 281s notebook/traittypes.py:238: TypeError 281s ____________________________ test_no_create_nb_dir _____________________________ 281s 281s self = 281s 281s def _resolve_classes(self): 281s # Resolve all string names to actual classes. 
281s self.importable_klasses = [] 281s for klass in self.klasses: 281s if isinstance(klass, str): 281s try: 281s > klass = self._resolve_string(klass) 281s 281s notebook/traittypes.py:235: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string 281s return import_item(string) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s name = 'jupyter_server.contents.services.managers.ContentsManager' 281s 281s def import_item(name: str) -> Any: 281s """Import and return ``bar`` given the string ``foo.bar``. 281s 281s Calling ``bar = import_item("foo.bar")`` is the functional equivalent of 281s executing the code ``from foo import bar``. 281s 281s Parameters 281s ---------- 281s name : string 281s The fully qualified name of the module/package being imported. 281s 281s Returns 281s ------- 281s mod : module object 281s The module that was imported. 281s """ 281s if not isinstance(name, str): 281s raise TypeError("import_item accepts strings, not '%s'." % type(name)) 281s parts = name.rsplit(".", 1) 281s if len(parts) == 2: 281s # called with 'foo.bar....' 
281s package, obj = parts 281s > module = __import__(package, fromlist=[obj]) 281s E ModuleNotFoundError: No module named 'jupyter_server' 281s 281s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError 281s 281s During handling of the above exception, another exception occurred: 281s 281s def test_no_create_nb_dir(): 281s with TemporaryDirectory() as td: 281s nbdir = os.path.join(td, 'notebooks') 281s > app = NotebookApp() 281s 281s notebook/tests/test_notebookapp.py:55: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__ 281s inst.setup_instance(*args, **kwargs) 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance 281s super(HasTraits, self).setup_instance(*args, **kwargs) 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance 281s init(self) 281s notebook/traittypes.py:226: in instance_init 281s self._resolve_classes() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _resolve_classes(self): 281s # Resolve all string names to actual classes. 281s self.importable_klasses = [] 281s for klass in self.klasses: 281s if isinstance(klass, str): 281s try: 281s klass = self._resolve_string(klass) 281s self.importable_klasses.append(klass) 281s except: 281s > warn(f"{klass} is not importable. Is it installed?", ImportWarning) 281s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 281s 281s notebook/traittypes.py:238: TypeError 281s _____________________________ test_missing_nb_dir ______________________________ 281s 281s self = 281s 281s def _resolve_classes(self): 281s # Resolve all string names to actual classes. 
281s self.importable_klasses = [] 281s for klass in self.klasses: 281s if isinstance(klass, str): 281s try: 281s > klass = self._resolve_string(klass) 281s 281s notebook/traittypes.py:235: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string 281s return import_item(string) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s name = 'jupyter_server.contents.services.managers.ContentsManager' 281s 281s def import_item(name: str) -> Any: 281s """Import and return ``bar`` given the string ``foo.bar``. 281s 281s Calling ``bar = import_item("foo.bar")`` is the functional equivalent of 281s executing the code ``from foo import bar``. 281s 281s Parameters 281s ---------- 281s name : string 281s The fully qualified name of the module/package being imported. 281s 281s Returns 281s ------- 281s mod : module object 281s The module that was imported. 281s """ 281s if not isinstance(name, str): 281s raise TypeError("import_item accepts strings, not '%s'." % type(name)) 281s parts = name.rsplit(".", 1) 281s if len(parts) == 2: 281s # called with 'foo.bar....' 
281s package, obj = parts 281s > module = __import__(package, fromlist=[obj]) 281s E ModuleNotFoundError: No module named 'jupyter_server' 281s 281s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError 281s 281s During handling of the above exception, another exception occurred: 281s 281s def test_missing_nb_dir(): 281s with TemporaryDirectory() as td: 281s nbdir = os.path.join(td, 'notebook', 'dir', 'is', 'missing') 281s > app = NotebookApp() 281s 281s notebook/tests/test_notebookapp.py:62: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__ 281s inst.setup_instance(*args, **kwargs) 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance 281s super(HasTraits, self).setup_instance(*args, **kwargs) 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance 281s init(self) 281s notebook/traittypes.py:226: in instance_init 281s self._resolve_classes() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _resolve_classes(self): 281s # Resolve all string names to actual classes. 281s self.importable_klasses = [] 281s for klass in self.klasses: 281s if isinstance(klass, str): 281s try: 281s klass = self._resolve_string(klass) 281s self.importable_klasses.append(klass) 281s except: 281s > warn(f"{klass} is not importable. Is it installed?", ImportWarning) 281s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 281s 281s notebook/traittypes.py:238: TypeError 281s _____________________________ test_invalid_nb_dir ______________________________ 281s 281s self = 281s 281s def _resolve_classes(self): 281s # Resolve all string names to actual classes. 
281s self.importable_klasses = [] 281s for klass in self.klasses: 281s if isinstance(klass, str): 281s try: 281s > klass = self._resolve_string(klass) 281s 281s notebook/traittypes.py:235: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string 281s return import_item(string) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s name = 'jupyter_server.contents.services.managers.ContentsManager' 281s 281s def import_item(name: str) -> Any: 281s """Import and return ``bar`` given the string ``foo.bar``. 281s 281s Calling ``bar = import_item("foo.bar")`` is the functional equivalent of 281s executing the code ``from foo import bar``. 281s 281s Parameters 281s ---------- 281s name : string 281s The fully qualified name of the module/package being imported. 281s 281s Returns 281s ------- 281s mod : module object 281s The module that was imported. 281s """ 281s if not isinstance(name, str): 281s raise TypeError("import_item accepts strings, not '%s'." % type(name)) 281s parts = name.rsplit(".", 1) 281s if len(parts) == 2: 281s # called with 'foo.bar....' 
281s package, obj = parts 281s > module = __import__(package, fromlist=[obj]) 281s E ModuleNotFoundError: No module named 'jupyter_server' 281s 281s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError 281s 281s During handling of the above exception, another exception occurred: 281s 281s def test_invalid_nb_dir(): 281s with NamedTemporaryFile() as tf: 281s > app = NotebookApp() 281s 281s notebook/tests/test_notebookapp.py:68: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__ 281s inst.setup_instance(*args, **kwargs) 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance 281s super(HasTraits, self).setup_instance(*args, **kwargs) 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance 281s init(self) 281s notebook/traittypes.py:226: in instance_init 281s self._resolve_classes() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _resolve_classes(self): 281s # Resolve all string names to actual classes. 281s self.importable_klasses = [] 281s for klass in self.klasses: 281s if isinstance(klass, str): 281s try: 281s klass = self._resolve_string(klass) 281s self.importable_klasses.append(klass) 281s except: 281s > warn(f"{klass} is not importable. Is it installed?", ImportWarning) 281s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 281s 281s notebook/traittypes.py:238: TypeError 281s ____________________________ test_nb_dir_with_slash ____________________________ 281s 281s self = 281s 281s def _resolve_classes(self): 281s # Resolve all string names to actual classes. 
281s self.importable_klasses = [] 281s for klass in self.klasses: 281s if isinstance(klass, str): 281s try: 281s > klass = self._resolve_string(klass) 281s 281s notebook/traittypes.py:235: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string 281s return import_item(string) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s name = 'jupyter_server.contents.services.managers.ContentsManager' 281s 281s def import_item(name: str) -> Any: 281s """Import and return ``bar`` given the string ``foo.bar``. 281s 281s Calling ``bar = import_item("foo.bar")`` is the functional equivalent of 281s executing the code ``from foo import bar``. 281s 281s Parameters 281s ---------- 281s name : string 281s The fully qualified name of the module/package being imported. 281s 281s Returns 281s ------- 281s mod : module object 281s The module that was imported. 281s """ 281s if not isinstance(name, str): 281s raise TypeError("import_item accepts strings, not '%s'." % type(name)) 281s parts = name.rsplit(".", 1) 281s if len(parts) == 2: 281s # called with 'foo.bar....' 
281s package, obj = parts 281s > module = __import__(package, fromlist=[obj]) 281s E ModuleNotFoundError: No module named 'jupyter_server' 281s 281s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError 281s 281s During handling of the above exception, another exception occurred: 281s 281s def test_nb_dir_with_slash(): 281s with TemporaryDirectory(suffix="_slash" + os.sep) as td: 281s > app = NotebookApp(notebook_dir=td) 281s 281s notebook/tests/test_notebookapp.py:74: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__ 281s inst.setup_instance(*args, **kwargs) 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance 281s super(HasTraits, self).setup_instance(*args, **kwargs) 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance 281s init(self) 281s notebook/traittypes.py:226: in instance_init 281s self._resolve_classes() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _resolve_classes(self): 281s # Resolve all string names to actual classes. 281s self.importable_klasses = [] 281s for klass in self.klasses: 281s if isinstance(klass, str): 281s try: 281s klass = self._resolve_string(klass) 281s self.importable_klasses.append(klass) 281s except: 281s > warn(f"{klass} is not importable. Is it installed?", ImportWarning) 281s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 281s 281s notebook/traittypes.py:238: TypeError 281s _______________________________ test_nb_dir_root _______________________________ 281s 281s self = 281s 281s def _resolve_classes(self): 281s # Resolve all string names to actual classes. 
281s self.importable_klasses = [] 281s for klass in self.klasses: 281s if isinstance(klass, str): 281s try: 281s > klass = self._resolve_string(klass) 281s 281s notebook/traittypes.py:235: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string 281s return import_item(string) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s name = 'jupyter_server.contents.services.managers.ContentsManager' 281s 281s def import_item(name: str) -> Any: 281s """Import and return ``bar`` given the string ``foo.bar``. 281s 281s Calling ``bar = import_item("foo.bar")`` is the functional equivalent of 281s executing the code ``from foo import bar``. 281s 281s Parameters 281s ---------- 281s name : string 281s The fully qualified name of the module/package being imported. 281s 281s Returns 281s ------- 281s mod : module object 281s The module that was imported. 281s """ 281s if not isinstance(name, str): 281s raise TypeError("import_item accepts strings, not '%s'." % type(name)) 281s parts = name.rsplit(".", 1) 281s if len(parts) == 2: 281s # called with 'foo.bar....' 
281s package, obj = parts 281s > module = __import__(package, fromlist=[obj]) 281s E ModuleNotFoundError: No module named 'jupyter_server' 281s 281s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError 281s 281s During handling of the above exception, another exception occurred: 281s 281s def test_nb_dir_root(): 281s root = os.path.abspath(os.sep) # gets the right value on Windows, Posix 281s > app = NotebookApp(notebook_dir=root) 281s 281s notebook/tests/test_notebookapp.py:79: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__ 281s inst.setup_instance(*args, **kwargs) 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance 281s super(HasTraits, self).setup_instance(*args, **kwargs) 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance 281s init(self) 281s notebook/traittypes.py:226: in instance_init 281s self._resolve_classes() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _resolve_classes(self): 281s # Resolve all string names to actual classes. 281s self.importable_klasses = [] 281s for klass in self.klasses: 281s if isinstance(klass, str): 281s try: 281s klass = self._resolve_string(klass) 281s self.importable_klasses.append(klass) 281s except: 281s > warn(f"{klass} is not importable. Is it installed?", ImportWarning) 281s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 281s 281s notebook/traittypes.py:238: TypeError 281s _____________________________ test_generate_config _____________________________ 281s 281s self = 281s 281s def _resolve_classes(self): 281s # Resolve all string names to actual classes. 
281s self.importable_klasses = [] 281s for klass in self.klasses: 281s if isinstance(klass, str): 281s try: 281s > klass = self._resolve_string(klass) 281s 281s notebook/traittypes.py:235: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string 281s return import_item(string) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s name = 'jupyter_server.contents.services.managers.ContentsManager' 281s 281s def import_item(name: str) -> Any: 281s """Import and return ``bar`` given the string ``foo.bar``. 281s 281s Calling ``bar = import_item("foo.bar")`` is the functional equivalent of 281s executing the code ``from foo import bar``. 281s 281s Parameters 281s ---------- 281s name : string 281s The fully qualified name of the module/package being imported. 281s 281s Returns 281s ------- 281s mod : module object 281s The module that was imported. 281s """ 281s if not isinstance(name, str): 281s raise TypeError("import_item accepts strings, not '%s'." % type(name)) 281s parts = name.rsplit(".", 1) 281s if len(parts) == 2: 281s # called with 'foo.bar....' 
281s package, obj = parts 281s > module = __import__(package, fromlist=[obj]) 281s E ModuleNotFoundError: No module named 'jupyter_server' 281s 281s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError 281s 281s During handling of the above exception, another exception occurred: 281s 281s def test_generate_config(): 281s with TemporaryDirectory() as td: 281s > app = NotebookApp(config_dir=td) 281s 281s notebook/tests/test_notebookapp.py:84: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__ 281s inst.setup_instance(*args, **kwargs) 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance 281s super(HasTraits, self).setup_instance(*args, **kwargs) 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance 281s init(self) 281s notebook/traittypes.py:226: in instance_init 281s self._resolve_classes() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _resolve_classes(self): 281s # Resolve all string names to actual classes. 281s self.importable_klasses = [] 281s for klass in self.klasses: 281s if isinstance(klass, str): 281s try: 281s klass = self._resolve_string(klass) 281s self.importable_klasses.append(klass) 281s except: 281s > warn(f"{klass} is not importable. Is it installed?", ImportWarning) 281s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 281s 281s notebook/traittypes.py:238: TypeError 281s ____________________________ test_notebook_password ____________________________ 281s 281s self = 281s 281s def _resolve_classes(self): 281s # Resolve all string names to actual classes. 
281s self.importable_klasses = [] 281s for klass in self.klasses: 281s if isinstance(klass, str): 281s try: 281s > klass = self._resolve_string(klass) 281s 281s notebook/traittypes.py:235: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string 281s return import_item(string) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s name = 'jupyter_server.contents.services.managers.ContentsManager' 281s 281s def import_item(name: str) -> Any: 281s """Import and return ``bar`` given the string ``foo.bar``. 281s 281s Calling ``bar = import_item("foo.bar")`` is the functional equivalent of 281s executing the code ``from foo import bar``. 281s 281s Parameters 281s ---------- 281s name : string 281s The fully qualified name of the module/package being imported. 281s 281s Returns 281s ------- 281s mod : module object 281s The module that was imported. 281s """ 281s if not isinstance(name, str): 281s raise TypeError("import_item accepts strings, not '%s'." % type(name)) 281s parts = name.rsplit(".", 1) 281s if len(parts) == 2: 281s # called with 'foo.bar....' 
281s package, obj = parts 281s > module = __import__(package, fromlist=[obj]) 281s E ModuleNotFoundError: No module named 'jupyter_server' 281s 281s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError 281s 281s During handling of the above exception, another exception occurred: 281s 281s def test_notebook_password(): 281s password = 'secret' 281s with TemporaryDirectory() as td: 281s with patch.dict('os.environ', { 281s 'JUPYTER_CONFIG_DIR': td, 281s }), patch.object(getpass, 'getpass', return_value=password): 281s app = notebookapp.NotebookPasswordApp(log_level=logging.ERROR) 281s app.initialize([]) 281s app.start() 281s > nb = NotebookApp() 281s 281s notebook/tests/test_notebookapp.py:133: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__ 281s inst.setup_instance(*args, **kwargs) 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance 281s super(HasTraits, self).setup_instance(*args, **kwargs) 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance 281s init(self) 281s notebook/traittypes.py:226: in instance_init 281s self._resolve_classes() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _resolve_classes(self): 281s # Resolve all string names to actual classes. 281s self.importable_klasses = [] 281s for klass in self.klasses: 281s if isinstance(klass, str): 281s try: 281s klass = self._resolve_string(klass) 281s self.importable_klasses.append(klass) 281s except: 281s > warn(f"{klass} is not importable. 
Is it installed?", ImportWarning) 281s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 281s 281s notebook/traittypes.py:238: TypeError 281s _________________ TestInstallServerExtension.test_merge_config _________________ 281s 281s self = 281s 281s def _resolve_classes(self): 281s # Resolve all string names to actual classes. 281s self.importable_klasses = [] 281s for klass in self.klasses: 281s if isinstance(klass, str): 281s try: 281s > klass = self._resolve_string(klass) 281s 281s notebook/traittypes.py:235: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string 281s return import_item(string) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s name = 'jupyter_server.contents.services.managers.ContentsManager' 281s 281s def import_item(name: str) -> Any: 281s """Import and return ``bar`` given the string ``foo.bar``. 281s 281s Calling ``bar = import_item("foo.bar")`` is the functional equivalent of 281s executing the code ``from foo import bar``. 281s 281s Parameters 281s ---------- 281s name : string 281s The fully qualified name of the module/package being imported. 281s 281s Returns 281s ------- 281s mod : module object 281s The module that was imported. 281s """ 281s if not isinstance(name, str): 281s raise TypeError("import_item accepts strings, not '%s'." % type(name)) 281s parts = name.rsplit(".", 1) 281s if len(parts) == 2: 281s # called with 'foo.bar....' 
281s package, obj = parts 281s > module = __import__(package, fromlist=[obj]) 281s E ModuleNotFoundError: No module named 'jupyter_server' 281s 281s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError 281s 281s During handling of the above exception, another exception occurred: 281s 281s self = 281s 281s def test_merge_config(self): 281s # enabled at sys level 281s mock_sys = self._inject_mock_extension('mockext_sys') 281s # enabled at sys, disabled at user 281s mock_both = self._inject_mock_extension('mockext_both') 281s # enabled at user 281s mock_user = self._inject_mock_extension('mockext_user') 281s # enabled at Python 281s mock_py = self._inject_mock_extension('mockext_py') 281s 281s toggle_serverextension_python('mockext_sys', enabled=True, user=False) 281s toggle_serverextension_python('mockext_user', enabled=True, user=True) 281s toggle_serverextension_python('mockext_both', enabled=True, user=False) 281s toggle_serverextension_python('mockext_both', enabled=False, user=True) 281s 281s > app = NotebookApp(nbserver_extensions={'mockext_py': True}) 281s 281s notebook/tests/test_serverextensions.py:147: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__ 281s inst.setup_instance(*args, **kwargs) 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance 281s super(HasTraits, self).setup_instance(*args, **kwargs) 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance 281s init(self) 281s notebook/traittypes.py:226: in instance_init 281s self._resolve_classes() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _resolve_classes(self): 281s # Resolve all string names to actual classes. 
281s self.importable_klasses = [] 281s for klass in self.klasses: 281s if isinstance(klass, str): 281s try: 281s klass = self._resolve_string(klass) 281s self.importable_klasses.append(klass) 281s except: 281s > warn(f"{klass} is not importable. Is it installed?", ImportWarning) 281s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 281s 281s notebook/traittypes.py:238: TypeError 281s _________________ TestOrderedServerExtension.test_load_ordered _________________ 281s 281s self = 281s 281s def _resolve_classes(self): 281s # Resolve all string names to actual classes. 281s self.importable_klasses = [] 281s for klass in self.klasses: 281s if isinstance(klass, str): 281s try: 281s > klass = self._resolve_string(klass) 281s 281s notebook/traittypes.py:235: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string 281s return import_item(string) 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s name = 'jupyter_server.contents.services.managers.ContentsManager' 281s 281s def import_item(name: str) -> Any: 281s """Import and return ``bar`` given the string ``foo.bar``. 281s 281s Calling ``bar = import_item("foo.bar")`` is the functional equivalent of 281s executing the code ``from foo import bar``. 281s 281s Parameters 281s ---------- 281s name : string 281s The fully qualified name of the module/package being imported. 281s 281s Returns 281s ------- 281s mod : module object 281s The module that was imported. 281s """ 281s if not isinstance(name, str): 281s raise TypeError("import_item accepts strings, not '%s'." % type(name)) 281s parts = name.rsplit(".", 1) 281s if len(parts) == 2: 281s # called with 'foo.bar....' 
281s package, obj = parts 281s > module = __import__(package, fromlist=[obj]) 281s E ModuleNotFoundError: No module named 'jupyter_server' 281s 281s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError 281s 281s During handling of the above exception, another exception occurred: 281s 281s self = 281s 281s def test_load_ordered(self): 281s > app = NotebookApp() 281s 281s notebook/tests/test_serverextensions.py:189: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__ 281s inst.setup_instance(*args, **kwargs) 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance 281s super(HasTraits, self).setup_instance(*args, **kwargs) 281s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance 281s init(self) 281s notebook/traittypes.py:226: in instance_init 281s self._resolve_classes() 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s self = 281s 281s def _resolve_classes(self): 281s # Resolve all string names to actual classes. 281s self.importable_klasses = [] 281s for klass in self.klasses: 281s if isinstance(klass, str): 281s try: 281s klass = self._resolve_string(klass) 281s self.importable_klasses.append(klass) 281s except: 281s > warn(f"{klass} is not importable. 
Is it installed?", ImportWarning) 281s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 281s 281s notebook/traittypes.py:238: TypeError 281s _______________________________ test_help_output _______________________________ 281s 281s def test_help_output(): 281s """jupyter notebook --help-all works""" 281s # FIXME: will be notebook 281s > check_help_all_output('notebook') 281s 281s notebook/tests/test_utils.py:21: 281s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 281s 281s pkg = 'notebook', subcommand = None 281s 281s def check_help_all_output(pkg: str, subcommand: Sequence[str] | None = None) -> tuple[str, str]: 281s """test that `python -m PKG --help-all` works""" 281s cmd = [sys.executable, "-m", pkg] 281s if subcommand: 281s cmd.extend(subcommand) 281s cmd.append("--help-all") 281s out, err, rc = get_output_error_code(cmd) 281s > assert rc == 0, err 281s E AssertionError: Traceback (most recent call last): 281s E File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 235, in _resolve_classes 281s E klass = self._resolve_string(klass) 281s E ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 281s E File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string 281s E return import_item(string) 281s E ^^^^^^^^^^^^^^^^^^^ 281s E File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item 281s E module = __import__(package, fromlist=[obj]) 281s E ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 281s E ModuleNotFoundError: No module named 'jupyter_server' 281s E 281s E During handling of the above exception, another exception occurred: 281s E 281s E Traceback (most recent call last): 281s E File "<frozen runpy>", line 198, in _run_module_as_main 281s E File "<frozen runpy>", line 88, in _run_code 281s E File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/__main__.py", line 3, in <module> 281s E app.launch_new_instance() 281s E File "/usr/lib/python3/dist-packages/jupyter_core/application.py", line
282, in launch_instance 281s E super().launch_instance(argv=argv, **kwargs) 281s E File "/usr/lib/python3/dist-packages/traitlets/config/application.py", line 1073, in launch_instance 281s E app = cls.instance(**kwargs) 281s E ^^^^^^^^^^^^^^^^^^^^^^ 281s E File "/usr/lib/python3/dist-packages/traitlets/config/configurable.py", line 583, in instance 281s E inst = cls(*args, **kwargs) 281s E ^^^^^^^^^^^^^^^^^^^^ 281s E File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__ 281s E inst.setup_instance(*args, **kwargs) 281s E File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance 281s E super(HasTraits, self).setup_instance(*args, **kwargs) 281s E File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance 281s E init(self) 281s E File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 226, in instance_init 281s E self._resolve_classes() 281s E File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 238, in _resolve_classes 281s E warn(f"{klass} is not importable. Is it installed?", ImportWarning) 281s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 281s 281s /usr/lib/python3/dist-packages/traitlets/tests/utils.py:38: AssertionError 281s =============================== warnings summary =============================== 281s notebook/nbextensions.py:15 281s /tmp/autopkgtest.FXI16z/build.fGX/src/notebook/nbextensions.py:15: DeprecationWarning: Jupyter is migrating its paths to use standard platformdirs 281s given by the platformdirs library. To remove this warning and 281s see the appropriate new directories, set the environment variable 281s `JUPYTER_PLATFORM_DIRS=1` and then run `jupyter --paths`. 
281s The use of platformdirs will be the default in `jupyter_core` v6 281s from jupyter_core.paths import ( 281s 281s notebook/utils.py:280 281s notebook/utils.py:280 281s /tmp/autopkgtest.FXI16z/build.fGX/src/notebook/utils.py:280: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead. 281s return LooseVersion(v) >= LooseVersion(check) 281s 281s notebook/_tz.py:29: 1 warning 281s notebook/services/sessions/tests/test_sessionmanager.py: 9 warnings 281s /tmp/autopkgtest.FXI16z/build.fGX/src/notebook/_tz.py:29: DeprecationWarning: datetime.datetime.utcnow() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.datetime.now(datetime.UTC). 281s dt = unaware(*args, **kwargs) 281s 281s notebook/tests/test_notebookapp_integration.py:14 281s /tmp/autopkgtest.FXI16z/build.fGX/src/notebook/tests/test_notebookapp_integration.py:14: PytestUnknownMarkWarning: Unknown pytest.mark.integration_tests - is this a typo? 
You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/how-to/mark.html 281s pytestmark = pytest.mark.integration_tests 281s 281s notebook/auth/tests/test_login.py::LoginTest::test_next_bad 281s /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-1 (start_thread) 281s 281s Traceback (most recent call last): 281s File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 235, in _resolve_classes 281s klass = self._resolve_string(klass) 281s ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 281s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string 281s return import_item(string) 281s ^^^^^^^^^^^^^^^^^^^ 281s File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item 281s module = __import__(package, fromlist=[obj]) 281s ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 281s ModuleNotFoundError: No module named 'jupyter_server' 281s 281s During handling of the above exception, another exception occurred: 281s 281s Traceback (most recent call last): 281s File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/tests/launchnotebook.py", line 155, in start_thread 281s app = cls.notebook = NotebookApp( 281s ^^^^^^^^^^^^ 281s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__ 281s inst.setup_instance(*args, **kwargs) 281s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance 281s super(HasTraits, self).setup_instance(*args, **kwargs) 281s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance 281s init(self) 281s File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 226, in instance_init 281s self._resolve_classes() 281s File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 238, in _resolve_classes 281s warn(f"{klass} is not importable. 
Is it installed?", ImportWarning) 281s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 281s 281s During handling of the above exception, another exception occurred: 281s 281s Traceback (most recent call last): 281s File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner 281s self.run() 281s File "/usr/lib/python3.12/threading.py", line 1010, in run 281s self._target(*self._args, **self._kwargs) 281s File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/tests/launchnotebook.py", line 193, in start_thread 281s app.session_manager.close() 281s ^^^ 281s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value 281s 281s warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg)) 281s 281s notebook/bundler/tests/test_bundler_api.py::BundleAPITest::test_bundler_import_error 281s /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-2 (start_thread) 281s 281s Traceback (most recent call last): 281s File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 235, in _resolve_classes 281s klass = self._resolve_string(klass) 281s ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 281s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string 281s return import_item(string) 281s ^^^^^^^^^^^^^^^^^^^ 281s File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item 281s module = __import__(package, fromlist=[obj]) 281s ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 281s ModuleNotFoundError: No module named 'jupyter_server' 281s 281s During handling of the above exception, another exception occurred: 281s 281s Traceback (most recent call last): 281s File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/tests/launchnotebook.py", line 155, in start_thread 281s app = cls.notebook = NotebookApp( 281s ^^^^^^^^^^^^ 281s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 
1292, in __new__ 281s inst.setup_instance(*args, **kwargs) 281s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance 281s super(HasTraits, self).setup_instance(*args, **kwargs) 281s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance 281s init(self) 281s File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 226, in instance_init 281s self._resolve_classes() 281s File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 238, in _resolve_classes 281s warn(f"{klass} is not importable. Is it installed?", ImportWarning) 281s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 281s 281s During handling of the above exception, another exception occurred: 281s 281s Traceback (most recent call last): 281s File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner 281s self.run() 281s File "/usr/lib/python3.12/threading.py", line 1010, in run 281s self._target(*self._args, **self._kwargs) 281s File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/tests/launchnotebook.py", line 193, in start_thread 281s app.session_manager.close() 281s ^^^ 281s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value 281s 281s warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg)) 281s 281s notebook/services/api/tests/test_api.py::APITest::test_get_spec 281s /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-3 (start_thread) 281s 281s Traceback (most recent call last): 281s File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 235, in _resolve_classes 281s klass = self._resolve_string(klass) 281s ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 281s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string 281s return import_item(string) 281s ^^^^^^^^^^^^^^^^^^^ 281s File 
"/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item 281s module = __import__(package, fromlist=[obj]) 281s ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 281s ModuleNotFoundError: No module named 'jupyter_server' 281s 281s During handling of the above exception, another exception occurred: 281s 281s Traceback (most recent call last): 281s File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/tests/launchnotebook.py", line 155, in start_thread 281s app = cls.notebook = NotebookApp( 281s ^^^^^^^^^^^^ 281s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__ 281s inst.setup_instance(*args, **kwargs) 281s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance 281s super(HasTraits, self).setup_instance(*args, **kwargs) 281s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance 281s init(self) 281s File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 226, in instance_init 281s self._resolve_classes() 281s File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 238, in _resolve_classes 281s warn(f"{klass} is not importable. 
Is it installed?", ImportWarning) 281s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 281s 281s During handling of the above exception, another exception occurred: 281s 281s Traceback (most recent call last): 281s File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner 281s self.run() 281s File "/usr/lib/python3.12/threading.py", line 1010, in run 281s self._target(*self._args, **self._kwargs) 281s File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/tests/launchnotebook.py", line 193, in start_thread 281s app.session_manager.close() 281s ^^^ 281s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value 281s 281s warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg)) 281s 281s notebook/services/config/tests/test_config_api.py::APITest::test_create_retrieve_config 281s /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-4 (start_thread) 281s 281s Traceback (most recent call last): 281s File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 235, in _resolve_classes 281s klass = self._resolve_string(klass) 281s ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 281s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string 281s return import_item(string) 281s ^^^^^^^^^^^^^^^^^^^ 281s File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item 281s module = __import__(package, fromlist=[obj]) 281s ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 281s ModuleNotFoundError: No module named 'jupyter_server' 281s 281s During handling of the above exception, another exception occurred: 281s 281s Traceback (most recent call last): 281s File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/tests/launchnotebook.py", line 155, in start_thread 281s app = cls.notebook = NotebookApp( 281s ^^^^^^^^^^^^ 281s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", 
line 1292, in __new__ 281s inst.setup_instance(*args, **kwargs) 281s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance 281s super(HasTraits, self).setup_instance(*args, **kwargs) 281s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance 281s init(self) 281s File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 226, in instance_init 281s self._resolve_classes() 281s File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 238, in _resolve_classes 281s warn(f"{klass} is not importable. Is it installed?", ImportWarning) 281s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 281s 281s During handling of the above exception, another exception occurred: 281s 281s Traceback (most recent call last): 281s File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner 281s self.run() 281s File "/usr/lib/python3.12/threading.py", line 1010, in run 281s self._target(*self._args, **self._kwargs) 281s File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/tests/launchnotebook.py", line 193, in start_thread 281s app.session_manager.close() 281s ^^^ 281s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value 281s 281s warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg)) 281s 281s notebook/services/contents/tests/test_contents_api.py::APITest::test_checkpoints 281s /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-5 (start_thread) 281s 281s Traceback (most recent call last): 281s File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 235, in _resolve_classes 281s klass = self._resolve_string(klass) 281s ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 281s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string 281s return import_item(string) 281s ^^^^^^^^^^^^^^^^^^^ 281s File 
"/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item 281s module = __import__(package, fromlist=[obj]) 281s ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 281s ModuleNotFoundError: No module named 'jupyter_server' 281s 281s During handling of the above exception, another exception occurred: 281s 281s Traceback (most recent call last): 281s File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/tests/launchnotebook.py", line 155, in start_thread 281s app = cls.notebook = NotebookApp( 281s ^^^^^^^^^^^^ 281s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__ 281s inst.setup_instance(*args, **kwargs) 281s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance 281s super(HasTraits, self).setup_instance(*args, **kwargs) 281s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance 281s init(self) 281s File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 226, in instance_init 281s self._resolve_classes() 281s File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 238, in _resolve_classes 281s warn(f"{klass} is not importable. 
Is it installed?", ImportWarning) 281s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 281s 281s During handling of the above exception, another exception occurred: 281s 281s Traceback (most recent call last): 281s File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner 281s self.run() 281s File "/usr/lib/python3.12/threading.py", line 1010, in run 281s self._target(*self._args, **self._kwargs) 281s File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/tests/launchnotebook.py", line 193, in start_thread 281s app.session_manager.close() 281s ^^^ 281s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value 281s 281s warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg)) 281s 281s notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_checkpoints 281s /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-6 (start_thread) 281s 281s Traceback (most recent call last): 281s File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 235, in _resolve_classes 281s klass = self._resolve_string(klass) 281s ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 281s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string 281s return import_item(string) 281s ^^^^^^^^^^^^^^^^^^^ 281s File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item 281s module = __import__(package, fromlist=[obj]) 281s ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 281s ModuleNotFoundError: No module named 'jupyter_server' 281s 281s During handling of the above exception, another exception occurred: 281s 281s Traceback (most recent call last): 281s File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/tests/launchnotebook.py", line 155, in start_thread 281s app = cls.notebook = NotebookApp( 281s ^^^^^^^^^^^^ 281s File 
"/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__
281s     inst.setup_instance(*args, **kwargs)
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance
281s     super(HasTraits, self).setup_instance(*args, **kwargs)
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance
281s     init(self)
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 226, in instance_init
281s     self._resolve_classes()
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 238, in _resolve_classes
281s     warn(f"{klass} is not importable. Is it installed?", ImportWarning)
281s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel'
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s Traceback (most recent call last):
281s   File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner
281s     self.run()
281s   File "/usr/lib/python3.12/threading.py", line 1010, in run
281s     self._target(*self._args, **self._kwargs)
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/tests/launchnotebook.py", line 193, in start_thread
281s     app.session_manager.close()
281s     ^^^
281s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value
281s 
281s   warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg))
281s 
281s notebook/services/contents/tests/test_largefilemanager.py: 42 warnings
281s notebook/services/contents/tests/test_manager.py: 526 warnings
281s   /tmp/autopkgtest.FXI16z/build.fGX/src/notebook/_tz.py:29: DeprecationWarning: datetime.datetime.utcfromtimestamp() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.datetime.fromtimestamp(timestamp, datetime.UTC).
281s     dt = unaware(*args, **kwargs)
281s 
281s notebook/services/kernels/tests/test_kernels_api.py::KernelAPITest::test_connections
281s   /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-7 (start_thread)
281s 
281s Traceback (most recent call last):
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 235, in _resolve_classes
281s     klass = self._resolve_string(klass)
281s             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string
281s     return import_item(string)
281s            ^^^^^^^^^^^^^^^^^^^
281s   File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item
281s     module = __import__(package, fromlist=[obj])
281s              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
281s ModuleNotFoundError: No module named 'jupyter_server'
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s Traceback (most recent call last):
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/tests/launchnotebook.py", line 155, in start_thread
281s     app = cls.notebook = NotebookApp(
281s           ^^^^^^^^^^^^
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__
281s     inst.setup_instance(*args, **kwargs)
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance
281s     super(HasTraits, self).setup_instance(*args, **kwargs)
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance
281s     init(self)
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 226, in instance_init
281s     self._resolve_classes()
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 238, in _resolve_classes
281s     warn(f"{klass} is not importable. Is it installed?", ImportWarning)
281s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel'
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s Traceback (most recent call last):
281s   File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner
281s     self.run()
281s   File "/usr/lib/python3.12/threading.py", line 1010, in run
281s     self._target(*self._args, **self._kwargs)
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/tests/launchnotebook.py", line 193, in start_thread
281s     app.session_manager.close()
281s     ^^^
281s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value
281s 
281s   warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg))
281s 
281s notebook/services/kernels/tests/test_kernels_api.py::AsyncKernelAPITest::test_connections
281s   /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-8 (start_thread)
281s 
281s Traceback (most recent call last):
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 235, in _resolve_classes
281s     klass = self._resolve_string(klass)
281s             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string
281s     return import_item(string)
281s            ^^^^^^^^^^^^^^^^^^^
281s   File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item
281s     module = __import__(package, fromlist=[obj])
281s              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
281s ModuleNotFoundError: No module named 'jupyter_server'
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s Traceback (most recent call last):
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/tests/launchnotebook.py", line 155, in start_thread
281s     app = cls.notebook = NotebookApp(
281s           ^^^^^^^^^^^^
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__
281s     inst.setup_instance(*args, **kwargs)
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance
281s     super(HasTraits, self).setup_instance(*args, **kwargs)
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance
281s     init(self)
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 226, in instance_init
281s     self._resolve_classes()
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 238, in _resolve_classes
281s     warn(f"{klass} is not importable. Is it installed?", ImportWarning)
281s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel'
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s Traceback (most recent call last):
281s   File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner
281s     self.run()
281s   File "/usr/lib/python3.12/threading.py", line 1010, in run
281s     self._target(*self._args, **self._kwargs)
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/tests/launchnotebook.py", line 193, in start_thread
281s     app.session_manager.close()
281s     ^^^
281s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value
281s 
281s   warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg))
281s 
281s notebook/services/kernels/tests/test_kernels_api.py::KernelFilterTest::test_config
281s   /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-9 (start_thread)
281s 
281s Traceback (most recent call last):
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 235, in _resolve_classes
281s     klass = self._resolve_string(klass)
281s             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string
281s     return import_item(string)
281s            ^^^^^^^^^^^^^^^^^^^
281s   File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item
281s     module = __import__(package, fromlist=[obj])
281s              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
281s ModuleNotFoundError: No module named 'jupyter_server'
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s Traceback (most recent call last):
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/tests/launchnotebook.py", line 155, in start_thread
281s     app = cls.notebook = NotebookApp(
281s           ^^^^^^^^^^^^
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__
281s     inst.setup_instance(*args, **kwargs)
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance
281s     super(HasTraits, self).setup_instance(*args, **kwargs)
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance
281s     init(self)
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 226, in instance_init
281s     self._resolve_classes()
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 238, in _resolve_classes
281s     warn(f"{klass} is not importable. Is it installed?", ImportWarning)
281s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel'
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s Traceback (most recent call last):
281s   File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner
281s     self.run()
281s   File "/usr/lib/python3.12/threading.py", line 1010, in run
281s     self._target(*self._args, **self._kwargs)
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/tests/launchnotebook.py", line 193, in start_thread
281s     app.session_manager.close()
281s     ^^^
281s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value
281s 
281s   warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg))
281s 
281s notebook/services/kernels/tests/test_kernels_api.py::KernelCullingTest::test_culling
281s   /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-10 (start_thread)
281s 
281s Traceback (most recent call last):
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 235, in _resolve_classes
281s     klass = self._resolve_string(klass)
281s             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string
281s     return import_item(string)
281s            ^^^^^^^^^^^^^^^^^^^
281s   File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item
281s     module = __import__(package, fromlist=[obj])
281s              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
281s ModuleNotFoundError: No module named 'jupyter_server'
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s Traceback (most recent call last):
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/tests/launchnotebook.py", line 155, in start_thread
281s     app = cls.notebook = NotebookApp(
281s           ^^^^^^^^^^^^
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__
281s     inst.setup_instance(*args, **kwargs)
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance
281s     super(HasTraits, self).setup_instance(*args, **kwargs)
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance
281s     init(self)
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 226, in instance_init
281s     self._resolve_classes()
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 238, in _resolve_classes
281s     warn(f"{klass} is not importable. Is it installed?", ImportWarning)
281s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel'
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s Traceback (most recent call last):
281s   File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner
281s     self.run()
281s   File "/usr/lib/python3.12/threading.py", line 1010, in run
281s     self._target(*self._args, **self._kwargs)
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/tests/launchnotebook.py", line 193, in start_thread
281s     app.session_manager.close()
281s     ^^^
281s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value
281s 
281s   warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg))
281s 
281s notebook/services/kernelspecs/tests/test_kernelspecs_api.py::APITest::test_get_kernel_resource_file
281s   /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-11 (start_thread)
281s 
281s Traceback (most recent call last):
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 235, in _resolve_classes
281s     klass = self._resolve_string(klass)
281s             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string
281s     return import_item(string)
281s            ^^^^^^^^^^^^^^^^^^^
281s   File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item
281s     module = __import__(package, fromlist=[obj])
281s              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
281s ModuleNotFoundError: No module named 'jupyter_server'
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s Traceback (most recent call last):
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/tests/launchnotebook.py", line 155, in start_thread
281s     app = cls.notebook = NotebookApp(
281s           ^^^^^^^^^^^^
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__
281s     inst.setup_instance(*args, **kwargs)
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance
281s     super(HasTraits, self).setup_instance(*args, **kwargs)
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance
281s     init(self)
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 226, in instance_init
281s     self._resolve_classes()
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 238, in _resolve_classes
281s     warn(f"{klass} is not importable. Is it installed?", ImportWarning)
281s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel'
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s Traceback (most recent call last):
281s   File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner
281s     self.run()
281s   File "/usr/lib/python3.12/threading.py", line 1010, in run
281s     self._target(*self._args, **self._kwargs)
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/tests/launchnotebook.py", line 193, in start_thread
281s     app.session_manager.close()
281s     ^^^
281s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value
281s 
281s   warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg))
281s 
281s notebook/services/nbconvert/tests/test_nbconvert_api.py::APITest::test_list_formats
281s   /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-12 (start_thread)
281s 
281s Traceback (most recent call last):
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 235, in _resolve_classes
281s     klass = self._resolve_string(klass)
281s             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string
281s     return import_item(string)
281s            ^^^^^^^^^^^^^^^^^^^
281s   File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item
281s     module = __import__(package, fromlist=[obj])
281s              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
281s ModuleNotFoundError: No module named 'jupyter_server'
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s Traceback (most recent call last):
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/tests/launchnotebook.py", line 155, in start_thread
281s     app = cls.notebook = NotebookApp(
281s           ^^^^^^^^^^^^
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__
281s     inst.setup_instance(*args, **kwargs)
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance
281s     super(HasTraits, self).setup_instance(*args, **kwargs)
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance
281s     init(self)
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 226, in instance_init
281s     self._resolve_classes()
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 238, in _resolve_classes
281s     warn(f"{klass} is not importable. Is it installed?", ImportWarning)
281s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel'
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s Traceback (most recent call last):
281s   File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner
281s     self.run()
281s   File "/usr/lib/python3.12/threading.py", line 1010, in run
281s     self._target(*self._args, **self._kwargs)
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/tests/launchnotebook.py", line 193, in start_thread
281s     app.session_manager.close()
281s     ^^^
281s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value
281s 
281s   warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg))
281s 
281s notebook/services/sessions/tests/test_sessions_api.py::SessionAPITest::test_create
281s   /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-13 (start_thread)
281s 
281s Traceback (most recent call last):
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 235, in _resolve_classes
281s     klass = self._resolve_string(klass)
281s             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string
281s     return import_item(string)
281s            ^^^^^^^^^^^^^^^^^^^
281s   File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item
281s     module = __import__(package, fromlist=[obj])
281s              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
281s ModuleNotFoundError: No module named 'jupyter_server'
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s Traceback (most recent call last):
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/tests/launchnotebook.py", line 155, in start_thread
281s     app = cls.notebook = NotebookApp(
281s           ^^^^^^^^^^^^
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__
281s     inst.setup_instance(*args, **kwargs)
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance
281s     super(HasTraits, self).setup_instance(*args, **kwargs)
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance
281s     init(self)
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 226, in instance_init
281s     self._resolve_classes()
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 238, in _resolve_classes
281s     warn(f"{klass} is not importable. Is it installed?", ImportWarning)
281s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel'
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s Traceback (most recent call last):
281s   File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner
281s     self.run()
281s   File "/usr/lib/python3.12/threading.py", line 1010, in run
281s     self._target(*self._args, **self._kwargs)
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/tests/launchnotebook.py", line 193, in start_thread
281s     app.session_manager.close()
281s     ^^^
281s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value
281s 
281s   warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg))
281s 
281s notebook/services/sessions/tests/test_sessions_api.py::AsyncSessionAPITest::test_create
281s   /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-14 (start_thread)
281s 
281s Traceback (most recent call last):
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 235, in _resolve_classes
281s     klass = self._resolve_string(klass)
281s             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string
281s     return import_item(string)
281s            ^^^^^^^^^^^^^^^^^^^
281s   File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item
281s     module = __import__(package, fromlist=[obj])
281s              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
281s ModuleNotFoundError: No module named 'jupyter_server'
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s Traceback (most recent call last):
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/tests/launchnotebook.py", line 155, in start_thread
281s     app = cls.notebook = NotebookApp(
281s           ^^^^^^^^^^^^
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__
281s     inst.setup_instance(*args, **kwargs)
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance
281s     super(HasTraits, self).setup_instance(*args, **kwargs)
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance
281s     init(self)
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 226, in instance_init
281s     self._resolve_classes()
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 238, in _resolve_classes
281s     warn(f"{klass} is not importable. Is it installed?", ImportWarning)
281s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel'
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s Traceback (most recent call last):
281s   File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner
281s     self.run()
281s   File "/usr/lib/python3.12/threading.py", line 1010, in run
281s     self._target(*self._args, **self._kwargs)
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/tests/launchnotebook.py", line 193, in start_thread
281s     app.session_manager.close()
281s     ^^^
281s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value
281s 
281s   warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg))
281s 
281s notebook/terminal/tests/test_terminals_api.py::TerminalAPITest::test_create_terminal
281s   /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-15 (start_thread)
281s 
281s Traceback (most recent call last):
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 235, in _resolve_classes
281s     klass = self._resolve_string(klass)
281s             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string
281s     return import_item(string)
281s            ^^^^^^^^^^^^^^^^^^^
281s   File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item
281s     module = __import__(package, fromlist=[obj])
281s              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
281s ModuleNotFoundError: No module named 'jupyter_server'
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s Traceback (most recent call last):
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/tests/launchnotebook.py", line 155, in start_thread
281s     app = cls.notebook = NotebookApp(
281s           ^^^^^^^^^^^^
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__
281s     inst.setup_instance(*args, **kwargs)
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance
281s     super(HasTraits, self).setup_instance(*args, **kwargs)
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance
281s     init(self)
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 226, in instance_init
281s     self._resolve_classes()
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 238, in _resolve_classes
281s     warn(f"{klass} is not importable. Is it installed?", ImportWarning)
281s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel'
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s Traceback (most recent call last):
281s   File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner
281s     self.run()
281s   File "/usr/lib/python3.12/threading.py", line 1010, in run
281s     self._target(*self._args, **self._kwargs)
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/tests/launchnotebook.py", line 193, in start_thread
281s     app.session_manager.close()
281s     ^^^
281s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value
281s 
281s   warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg))
281s 
281s notebook/terminal/tests/test_terminals_api.py::TerminalCullingTest::test_config
281s   /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-16 (start_thread)
281s 
281s Traceback (most recent call last):
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 235, in _resolve_classes
281s     klass = self._resolve_string(klass)
281s             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string
281s     return import_item(string)
281s            ^^^^^^^^^^^^^^^^^^^
281s   File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item
281s     module = __import__(package, fromlist=[obj])
281s              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
281s ModuleNotFoundError: No module named 'jupyter_server'
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s Traceback (most recent call last):
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/tests/launchnotebook.py", line 155, in start_thread
281s     app = cls.notebook = NotebookApp(
281s           ^^^^^^^^^^^^
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__
281s     inst.setup_instance(*args, **kwargs)
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance
281s     super(HasTraits, self).setup_instance(*args, **kwargs)
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance
281s     init(self)
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 226, in instance_init
281s     self._resolve_classes()
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 238, in _resolve_classes
281s     warn(f"{klass} is not importable. Is it installed?", ImportWarning)
281s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel'
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s Traceback (most recent call last):
281s   File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner
281s     self.run()
281s   File "/usr/lib/python3.12/threading.py", line 1010, in run
281s     self._target(*self._args, **self._kwargs)
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/tests/launchnotebook.py", line 193, in start_thread
281s     app.session_manager.close()
281s     ^^^
281s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value
281s 
281s   warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg))
281s 
281s notebook/tests/test_files.py::FilesTest::test_contents_manager
281s   /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-17 (start_thread)
281s 
281s Traceback (most recent call last):
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 235, in _resolve_classes
281s     klass = self._resolve_string(klass)
281s             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string
281s     return import_item(string)
281s            ^^^^^^^^^^^^^^^^^^^
281s   File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item
281s     module = __import__(package, fromlist=[obj])
281s              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
281s ModuleNotFoundError: No module named 'jupyter_server'
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s Traceback (most recent call last):
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/tests/launchnotebook.py", line 155, in start_thread
281s     app = cls.notebook = NotebookApp(
281s           ^^^^^^^^^^^^
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__
281s     inst.setup_instance(*args, **kwargs)
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance
281s     super(HasTraits, self).setup_instance(*args, **kwargs)
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance
281s     init(self)
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 226, in instance_init
281s     self._resolve_classes()
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 238, in _resolve_classes
281s     warn(f"{klass} is not importable. Is it installed?", ImportWarning)
281s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel'
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s Traceback (most recent call last):
281s   File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner
281s     self.run()
281s   File "/usr/lib/python3.12/threading.py", line 1010, in run
281s     self._target(*self._args, **self._kwargs)
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/tests/launchnotebook.py", line 193, in start_thread
281s     app.session_manager.close()
281s     ^^^
281s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value
281s 
281s   warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg))
281s 
281s notebook/tests/test_gateway.py::TestGateway::test_gateway_class_mappings
281s   /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-18 (start_thread)
281s 
281s Traceback (most recent call last):
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 235, in _resolve_classes
281s     klass = self._resolve_string(klass)
281s             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string
281s     return import_item(string)
281s            ^^^^^^^^^^^^^^^^^^^
281s   File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item
281s     module = __import__(package, fromlist=[obj])
281s              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
281s ModuleNotFoundError: No module named 'jupyter_server'
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s Traceback (most recent call last):
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/tests/launchnotebook.py", line 155, in start_thread
281s     app = cls.notebook = NotebookApp(
281s           ^^^^^^^^^^^^
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__
281s     inst.setup_instance(*args, **kwargs)
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance
281s     super(HasTraits, self).setup_instance(*args, **kwargs)
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance
281s     init(self)
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 226, in instance_init
281s     self._resolve_classes()
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 238, in _resolve_classes
281s     warn(f"{klass} is not importable. Is it installed?", ImportWarning)
281s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel'
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s Traceback (most recent call last):
281s   File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner
281s     self.run()
281s   File "/usr/lib/python3.12/threading.py", line 1010, in run
281s     self._target(*self._args, **self._kwargs)
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/tests/launchnotebook.py", line 193, in start_thread
281s     app.session_manager.close()
281s     ^^^
281s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value
281s 
281s   warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg))
281s 
281s notebook/tests/test_nbextensions.py::TestInstallNBExtension::test_install_tar
281s notebook/tests/test_nbextensions.py::TestInstallNBExtension::test_install_tar
281s notebook/tests/test_nbextensions.py::TestInstallNBExtension::test_install_tar
281s   /tmp/autopkgtest.FXI16z/build.fGX/src/notebook/nbextensions.py:154: DeprecationWarning: Python 3.14 will, by default, filter extracted tar archives and reject files or modify their metadata. Use the filter argument to control this behavior.
281s     archive.extractall(nbext)
281s 
281s notebook/tests/test_notebookapp.py::NotebookAppTests::test_list_running_servers
281s   /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-19 (start_thread)
281s 
281s Traceback (most recent call last):
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 235, in _resolve_classes
281s     klass = self._resolve_string(klass)
281s             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string
281s     return import_item(string)
281s            ^^^^^^^^^^^^^^^^^^^
281s   File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item
281s     module = __import__(package, fromlist=[obj])
281s              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
281s ModuleNotFoundError: No module named 'jupyter_server'
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s Traceback (most recent call last):
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/tests/launchnotebook.py", line 155, in start_thread
281s     app = cls.notebook = NotebookApp(
281s           ^^^^^^^^^^^^
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__
281s     inst.setup_instance(*args, **kwargs)
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance
281s     super(HasTraits, self).setup_instance(*args, **kwargs)
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance
281s     init(self)
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 226, in instance_init
281s     self._resolve_classes()
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 238, in _resolve_classes
281s     warn(f"{klass} is not importable. Is it installed?", ImportWarning)
281s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel'
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s Traceback (most recent call last):
281s   File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner
281s     self.run()
281s   File "/usr/lib/python3.12/threading.py", line 1010, in run
281s     self._target(*self._args, **self._kwargs)
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/tests/launchnotebook.py", line 193, in start_thread
281s     app.session_manager.close()
281s     ^^^
281s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value
281s 
281s   warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg))
281s 
281s notebook/tests/test_notebookapp.py::NotebookUnixSocketTests::test_list_running_sock_servers
281s   /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-20 (start_thread)
281s 
281s Traceback (most recent call last):
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 235, in _resolve_classes
281s     klass = self._resolve_string(klass)
281s             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string
281s     return import_item(string)
281s            ^^^^^^^^^^^^^^^^^^^
281s   File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item
281s     module = __import__(package, fromlist=[obj])
281s              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
281s ModuleNotFoundError: No module named 'jupyter_server'
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s Traceback (most recent call last):
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/tests/launchnotebook.py", line 155, in start_thread
281s     app = cls.notebook = NotebookApp(
281s           ^^^^^^^^^^^^
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__
281s     inst.setup_instance(*args, **kwargs)
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance
281s     super(HasTraits, self).setup_instance(*args, **kwargs)
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance
281s     init(self)
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 226, in instance_init
281s     self._resolve_classes()
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 238, in _resolve_classes
281s     warn(f"{klass} is not importable. Is it installed?", ImportWarning)
281s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel'
281s 
281s During handling of the above exception, another exception occurred:
281s 
281s Traceback (most recent call last):
281s   File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner
281s     self.run()
281s   File "/usr/lib/python3.12/threading.py", line 1010, in run
281s     self._target(*self._args, **self._kwargs)
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/tests/launchnotebook.py", line 193, in start_thread
281s     app.session_manager.close()
281s     ^^^
281s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value
281s 
281s   warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg))
281s 
281s notebook/tests/test_notebookapp.py::NotebookAppJSONLoggingTests::test_log_json_enabled
281s   /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-21 (start_thread)
281s 
281s Traceback (most recent call last):
281s   File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 235, in _resolve_classes
281s     klass = self._resolve_string(klass)
281s             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
281s   File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string
281s 
return import_item(string) 281s ^^^^^^^^^^^^^^^^^^^ 281s File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item 281s module = __import__(package, fromlist=[obj]) 281s ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 281s ModuleNotFoundError: No module named 'jupyter_server' 281s 281s During handling of the above exception, another exception occurred: 281s 281s Traceback (most recent call last): 281s File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/tests/launchnotebook.py", line 155, in start_thread 281s app = cls.notebook = NotebookApp( 281s ^^^^^^^^^^^^ 281s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__ 281s inst.setup_instance(*args, **kwargs) 281s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance 281s super(HasTraits, self).setup_instance(*args, **kwargs) 281s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance 281s init(self) 281s File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 226, in instance_init 281s self._resolve_classes() 281s File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 238, in _resolve_classes 281s warn(f"{klass} is not importable. 
Is it installed?", ImportWarning) 281s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 281s 281s During handling of the above exception, another exception occurred: 281s 281s Traceback (most recent call last): 281s File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner 281s self.run() 281s File "/usr/lib/python3.12/threading.py", line 1010, in run 281s self._target(*self._args, **self._kwargs) 281s File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/tests/launchnotebook.py", line 193, in start_thread 281s app.session_manager.close() 281s ^^^ 281s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value 281s 281s warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg)) 281s 281s notebook/tests/test_paths.py::RedirectTestCase::test_trailing_slash 281s /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-22 (start_thread) 281s 281s Traceback (most recent call last): 281s File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 235, in _resolve_classes 281s klass = self._resolve_string(klass) 281s ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 281s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string 281s return import_item(string) 281s ^^^^^^^^^^^^^^^^^^^ 281s File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item 281s module = __import__(package, fromlist=[obj]) 281s ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 281s ModuleNotFoundError: No module named 'jupyter_server' 281s 281s During handling of the above exception, another exception occurred: 281s 281s Traceback (most recent call last): 281s File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/tests/launchnotebook.py", line 155, in start_thread 281s app = cls.notebook = NotebookApp( 281s ^^^^^^^^^^^^ 281s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__ 
281s inst.setup_instance(*args, **kwargs) 281s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance 281s super(HasTraits, self).setup_instance(*args, **kwargs) 281s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance 281s init(self) 281s File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 226, in instance_init 281s self._resolve_classes() 281s File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 238, in _resolve_classes 281s warn(f"{klass} is not importable. Is it installed?", ImportWarning) 281s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 281s 281s During handling of the above exception, another exception occurred: 281s 281s Traceback (most recent call last): 281s File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner 281s self.run() 281s File "/usr/lib/python3.12/threading.py", line 1010, in run 281s self._target(*self._args, **self._kwargs) 281s File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/tests/launchnotebook.py", line 193, in start_thread 281s app.session_manager.close() 281s ^^^ 281s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value 281s 281s warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg)) 281s 281s notebook/tree/tests/test_tree_handler.py::TreeTest::test_redirect 281s /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-23 (start_thread) 281s 281s Traceback (most recent call last): 281s File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 235, in _resolve_classes 281s klass = self._resolve_string(klass) 281s ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 281s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string 281s return import_item(string) 281s ^^^^^^^^^^^^^^^^^^^ 281s File 
"/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item 281s module = __import__(package, fromlist=[obj]) 281s ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 281s ModuleNotFoundError: No module named 'jupyter_server' 281s 281s During handling of the above exception, another exception occurred: 281s 281s Traceback (most recent call last): 281s File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/tests/launchnotebook.py", line 155, in start_thread 281s app = cls.notebook = NotebookApp( 281s ^^^^^^^^^^^^ 281s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__ 281s inst.setup_instance(*args, **kwargs) 281s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance 281s super(HasTraits, self).setup_instance(*args, **kwargs) 281s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance 281s init(self) 281s File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 226, in instance_init 281s self._resolve_classes() 281s File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/traittypes.py", line 238, in _resolve_classes 281s warn(f"{klass} is not importable. 
Is it installed?", ImportWarning) 281s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 281s 281s During handling of the above exception, another exception occurred: 281s 281s Traceback (most recent call last): 281s File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner 281s self.run() 281s File "/usr/lib/python3.12/threading.py", line 1010, in run 281s self._target(*self._args, **self._kwargs) 281s File "/tmp/autopkgtest.FXI16z/build.fGX/src/notebook/tests/launchnotebook.py", line 193, in start_thread 281s app.session_manager.close() 281s ^^^ 281s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value 281s 281s warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg)) 281s 281s -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html 281s =========================== short test summary info ============================ 281s FAILED notebook/services/sessions/tests/test_sessionmanager.py::TestSessionManager::test_bad_delete_session 281s FAILED notebook/services/sessions/tests/test_sessionmanager.py::TestSessionManager::test_bad_get_session 281s FAILED notebook/services/sessions/tests/test_sessionmanager.py::TestSessionManager::test_bad_update_session 281s FAILED notebook/services/sessions/tests/test_sessionmanager.py::TestSessionManager::test_delete_session 281s FAILED notebook/services/sessions/tests/test_sessionmanager.py::TestSessionManager::test_get_session 281s FAILED notebook/services/sessions/tests/test_sessionmanager.py::TestSessionManager::test_get_session_dead_kernel 281s FAILED notebook/services/sessions/tests/test_sessionmanager.py::TestSessionManager::test_list_sessions 281s FAILED notebook/services/sessions/tests/test_sessionmanager.py::TestSessionManager::test_list_sessions_dead_kernel 281s FAILED notebook/services/sessions/tests/test_sessionmanager.py::TestSessionManager::test_update_session 281s FAILED 
notebook/tests/test_notebookapp.py::test_help_output - AssertionError:... 281s FAILED notebook/tests/test_notebookapp.py::test_server_info_file - TypeError:... 281s FAILED notebook/tests/test_notebookapp.py::test_nb_dir - TypeError: warn() mi... 281s FAILED notebook/tests/test_notebookapp.py::test_no_create_nb_dir - TypeError:... 281s FAILED notebook/tests/test_notebookapp.py::test_missing_nb_dir - TypeError: w... 281s FAILED notebook/tests/test_notebookapp.py::test_invalid_nb_dir - TypeError: w... 281s FAILED notebook/tests/test_notebookapp.py::test_nb_dir_with_slash - TypeError... 281s FAILED notebook/tests/test_notebookapp.py::test_nb_dir_root - TypeError: warn... 281s FAILED notebook/tests/test_notebookapp.py::test_generate_config - TypeError: ... 281s FAILED notebook/tests/test_notebookapp.py::test_notebook_password - TypeError... 281s FAILED notebook/tests/test_serverextensions.py::TestInstallServerExtension::test_merge_config 281s FAILED notebook/tests/test_serverextensions.py::TestOrderedServerExtension::test_load_ordered 281s FAILED notebook/tests/test_utils.py::test_help_output - AssertionError: Trace... 281s ERROR notebook/auth/tests/test_login.py::LoginTest::test_next_bad - RuntimeEr... 281s ERROR notebook/auth/tests/test_login.py::LoginTest::test_next_ok - RuntimeErr... 281s ERROR notebook/bundler/tests/test_bundler_api.py::BundleAPITest::test_bundler_import_error 281s ERROR notebook/bundler/tests/test_bundler_api.py::BundleAPITest::test_bundler_invoke 281s ERROR notebook/bundler/tests/test_bundler_api.py::BundleAPITest::test_bundler_not_enabled 281s ERROR notebook/bundler/tests/test_bundler_api.py::BundleAPITest::test_missing_bundler_arg 281s ERROR notebook/bundler/tests/test_bundler_api.py::BundleAPITest::test_notebook_not_found 281s ERROR notebook/services/api/tests/test_api.py::APITest::test_get_spec - Runti... 281s ERROR notebook/services/api/tests/test_api.py::APITest::test_get_status - Run... 
281s ERROR notebook/services/api/tests/test_api.py::APITest::test_no_track_activity 281s ERROR notebook/services/config/tests/test_config_api.py::APITest::test_create_retrieve_config 281s ERROR notebook/services/config/tests/test_config_api.py::APITest::test_get_unknown 281s ERROR notebook/services/config/tests/test_config_api.py::APITest::test_modify 281s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_checkpoints 281s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_checkpoints_separate_root 281s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_copy 281s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_copy_400_hidden 281s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_copy_copy 281s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_copy_dir_400 281s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_copy_path 281s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_copy_put_400 281s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_copy_put_400_hidden 281s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_create_untitled 281s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_create_untitled_txt 281s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_delete_hidden_dir 281s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_delete_hidden_file 281s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_file_checkpoints 281s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_get_404_hidden 281s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_get_bad_type 281s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_get_binary_file_contents 281s ERROR 
notebook/services/contents/tests/test_contents_api.py::APITest::test_get_contents_no_such_file 281s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_get_dir_no_content 281s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_get_nb_contents 281s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_get_nb_invalid 281s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_get_nb_no_content 281s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_get_text_file_contents 281s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_list_dirs 281s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_list_nonexistant_dir 281s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_list_notebooks 281s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_mkdir 281s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_mkdir_hidden_400 281s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_mkdir_untitled 281s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_rename 281s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_rename_400_hidden 281s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_rename_existing 281s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_save 281s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_upload 281s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_upload_b64 281s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_upload_txt 281s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_upload_txt_hidden 281s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_upload_v2 281s ERROR 
notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_checkpoints 281s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_checkpoints_separate_root 281s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_config_did_something 281s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_copy 281s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_copy_400_hidden 281s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_copy_copy 281s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_copy_dir_400 281s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_copy_path 281s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_copy_put_400 281s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_copy_put_400_hidden 281s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_create_untitled 281s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_create_untitled_txt 281s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_delete_hidden_dir 281s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_delete_hidden_file 281s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_file_checkpoints 281s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_get_404_hidden 281s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_get_bad_type 281s ERROR 
notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_get_binary_file_contents 281s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_get_contents_no_such_file 281s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_get_dir_no_content 281s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_get_nb_contents 281s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_get_nb_invalid 281s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_get_nb_no_content 281s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_get_text_file_contents 281s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_list_dirs 281s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_list_nonexistant_dir 281s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_list_notebooks 281s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_mkdir 281s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_mkdir_hidden_400 281s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_mkdir_untitled 281s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_rename 281s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_rename_400_hidden 281s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_rename_existing 281s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_save 281s 
ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_upload 281s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_upload_b64 281s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_upload_txt 281s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_upload_txt_hidden 281s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_upload_v2 281s ERROR notebook/services/kernels/tests/test_kernels_api.py::KernelAPITest::test_connections 281s ERROR notebook/services/kernels/tests/test_kernels_api.py::KernelAPITest::test_default_kernel 281s ERROR notebook/services/kernels/tests/test_kernels_api.py::KernelAPITest::test_kernel_handler 281s ERROR notebook/services/kernels/tests/test_kernels_api.py::KernelAPITest::test_main_kernel_handler 281s ERROR notebook/services/kernels/tests/test_kernels_api.py::KernelAPITest::test_no_kernels 281s ERROR notebook/services/kernels/tests/test_kernels_api.py::AsyncKernelAPITest::test_connections 281s ERROR notebook/services/kernels/tests/test_kernels_api.py::AsyncKernelAPITest::test_default_kernel 281s ERROR notebook/services/kernels/tests/test_kernels_api.py::AsyncKernelAPITest::test_kernel_handler 281s ERROR notebook/services/kernels/tests/test_kernels_api.py::AsyncKernelAPITest::test_main_kernel_handler 281s ERROR notebook/services/kernels/tests/test_kernels_api.py::AsyncKernelAPITest::test_no_kernels 281s ERROR notebook/services/kernels/tests/test_kernels_api.py::KernelFilterTest::test_config 281s ERROR notebook/services/kernels/tests/test_kernels_api.py::KernelCullingTest::test_culling 281s ERROR notebook/services/kernelspecs/tests/test_kernelspecs_api.py::APITest::test_get_kernel_resource_file 281s ERROR notebook/services/kernelspecs/tests/test_kernelspecs_api.py::APITest::test_get_kernelspec 281s ERROR 
notebook/services/kernelspecs/tests/test_kernelspecs_api.py::APITest::test_get_kernelspec_spaces 281s ERROR notebook/services/kernelspecs/tests/test_kernelspecs_api.py::APITest::test_get_nonexistant_kernelspec 281s ERROR notebook/services/kernelspecs/tests/test_kernelspecs_api.py::APITest::test_get_nonexistant_resource 281s ERROR notebook/services/kernelspecs/tests/test_kernelspecs_api.py::APITest::test_list_kernelspecs 281s ERROR notebook/services/kernelspecs/tests/test_kernelspecs_api.py::APITest::test_list_kernelspecs_bad 281s ERROR notebook/services/nbconvert/tests/test_nbconvert_api.py::APITest::test_list_formats 281s ERROR notebook/services/sessions/tests/test_sessions_api.py::SessionAPITest::test_create 281s ERROR notebook/services/sessions/tests/test_sessions_api.py::SessionAPITest::test_create_console_session 281s ERROR notebook/services/sessions/tests/test_sessions_api.py::SessionAPITest::test_create_deprecated 281s ERROR notebook/services/sessions/tests/test_sessions_api.py::SessionAPITest::test_create_file_session 281s ERROR notebook/services/sessions/tests/test_sessions_api.py::SessionAPITest::test_create_with_kernel_id 281s ERROR notebook/services/sessions/tests/test_sessions_api.py::SessionAPITest::test_delete 281s ERROR notebook/services/sessions/tests/test_sessions_api.py::SessionAPITest::test_modify_kernel_id 281s ERROR notebook/services/sessions/tests/test_sessions_api.py::SessionAPITest::test_modify_kernel_name 281s ERROR notebook/services/sessions/tests/test_sessions_api.py::SessionAPITest::test_modify_path 281s ERROR notebook/services/sessions/tests/test_sessions_api.py::SessionAPITest::test_modify_path_deprecated 281s ERROR notebook/services/sessions/tests/test_sessions_api.py::SessionAPITest::test_modify_type 281s ERROR notebook/services/sessions/tests/test_sessions_api.py::AsyncSessionAPITest::test_create 281s ERROR notebook/services/sessions/tests/test_sessions_api.py::AsyncSessionAPITest::test_create_console_session 281s ERROR 
notebook/services/sessions/tests/test_sessions_api.py::AsyncSessionAPITest::test_create_deprecated 281s ERROR notebook/services/sessions/tests/test_sessions_api.py::AsyncSessionAPITest::test_create_file_session 281s ERROR notebook/services/sessions/tests/test_sessions_api.py::AsyncSessionAPITest::test_create_with_kernel_id 281s ERROR notebook/services/sessions/tests/test_sessions_api.py::AsyncSessionAPITest::test_delete 281s ERROR notebook/services/sessions/tests/test_sessions_api.py::AsyncSessionAPITest::test_modify_kernel_id 281s ERROR notebook/services/sessions/tests/test_sessions_api.py::AsyncSessionAPITest::test_modify_kernel_name 281s ERROR notebook/services/sessions/tests/test_sessions_api.py::AsyncSessionAPITest::test_modify_path 281s ERROR notebook/services/sessions/tests/test_sessions_api.py::AsyncSessionAPITest::test_modify_path_deprecated 281s ERROR notebook/services/sessions/tests/test_sessions_api.py::AsyncSessionAPITest::test_modify_type 281s ERROR notebook/terminal/tests/test_terminals_api.py::TerminalAPITest::test_create_terminal 281s ERROR notebook/terminal/tests/test_terminals_api.py::TerminalAPITest::test_create_terminal_via_get 281s ERROR notebook/terminal/tests/test_terminals_api.py::TerminalAPITest::test_create_terminal_with_name 281s ERROR notebook/terminal/tests/test_terminals_api.py::TerminalAPITest::test_no_terminals 281s ERROR notebook/terminal/tests/test_terminals_api.py::TerminalAPITest::test_terminal_handler 281s ERROR notebook/terminal/tests/test_terminals_api.py::TerminalAPITest::test_terminal_root_handler 281s ERROR notebook/terminal/tests/test_terminals_api.py::TerminalCullingTest::test_config 281s ERROR notebook/terminal/tests/test_terminals_api.py::TerminalCullingTest::test_culling 281s ERROR notebook/tests/test_files.py::FilesTest::test_contents_manager - Runtim... 281s ERROR notebook/tests/test_files.py::FilesTest::test_download - RuntimeError: ... 
281s ERROR notebook/tests/test_files.py::FilesTest::test_hidden_files - RuntimeErr... 281s ERROR notebook/tests/test_files.py::FilesTest::test_old_files_redirect - Runt... 281s ERROR notebook/tests/test_files.py::FilesTest::test_view_html - RuntimeError:... 281s ERROR notebook/tests/test_gateway.py::TestGateway::test_gateway_class_mappings 281s ERROR notebook/tests/test_gateway.py::TestGateway::test_gateway_get_kernelspecs 281s ERROR notebook/tests/test_gateway.py::TestGateway::test_gateway_get_named_kernelspec 281s ERROR notebook/tests/test_gateway.py::TestGateway::test_gateway_kernel_lifecycle 281s ERROR notebook/tests/test_gateway.py::TestGateway::test_gateway_options - Run... 281s ERROR notebook/tests/test_gateway.py::TestGateway::test_gateway_session_lifecycle 281s ERROR notebook/tests/test_notebookapp.py::NotebookAppTests::test_list_running_servers 281s ERROR notebook/tests/test_notebookapp.py::NotebookAppTests::test_log_json_default 281s ERROR notebook/tests/test_notebookapp.py::NotebookAppTests::test_validate_log_json 281s ERROR notebook/tests/test_notebookapp.py::NotebookUnixSocketTests::test_list_running_sock_servers 281s ERROR notebook/tests/test_notebookapp.py::NotebookUnixSocketTests::test_run 281s ERROR notebook/tests/test_notebookapp.py::NotebookAppJSONLoggingTests::test_log_json_enabled 281s ERROR notebook/tests/test_notebookapp.py::NotebookAppJSONLoggingTests::test_validate_log_json 281s ERROR notebook/tests/test_paths.py::RedirectTestCase::test_trailing_slash - R... 281s ERROR notebook/tree/tests/test_tree_handler.py::TreeTest::test_redirect - Run... 
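Editor's note on the two errors that recur through every traceback above: `notebook/traittypes.py` falls back to emitting a warning when `jupyter_server` cannot be imported, but it calls traitlets' `warn` helper without the keyword-only `stacklevel` argument that the traitlets 5.14.3 helper requires, turning the fallback itself into a `TypeError`. A minimal sketch of that failure mode follows; the `warn` signature here is illustrative, mirroring only the keyword-only requirement shown in the traceback, not the actual traitlets source.

```python
import warnings


# Hypothetical helper mirroring the keyword-only signature that traitlets'
# internal warn() wrapper evidently has (per the TypeError in the log above).
def warn(msg, category, *, stacklevel):
    warnings.warn(msg, category, stacklevel=stacklevel)


# Calling it positionally, the way notebook/traittypes.py line 238 does,
# raises before any warning is issued:
try:
    warn("jupyter_server is not importable. Is it installed?", ImportWarning)
except TypeError as e:
    print(e)  # warn() missing 1 required keyword-only argument: 'stacklevel'

# Passing stacklevel explicitly is the fix applied in later notebook releases:
warn("example warning", ImportWarning, stacklevel=2)
```

The trailing `UnboundLocalError` in each traceback has the same root cause: `app` is only bound if the `NotebookApp(...)` constructor succeeds, so when construction raises, the cleanup path in `launchnotebook.py` dereferences a name that was never assigned.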
281s = 22 failed, 123 passed, 20 skipped, 5 deselected, 608 warnings, 160 errors in 36.98s = 282s autopkgtest [10:32:01]: test pytest: -----------------------] 286s autopkgtest [10:32:05]: test pytest: - - - - - - - - - - results - - - - - - - - - - 286s pytest FAIL non-zero exit status 1 290s autopkgtest [10:32:09]: test command1: preparing testbed 315s autopkgtest [10:32:34]: testbed dpkg architecture: armhf 317s autopkgtest [10:32:36]: testbed apt version: 2.9.5 317s autopkgtest [10:32:36]: @@@@@@@@@@@@@@@@@@@@ test bed setup 325s Get:1 http://ftpmaster.internal/ubuntu oracular-proposed InRelease [110 kB] 325s Get:2 http://ftpmaster.internal/ubuntu oracular-proposed/multiverse Sources [2576 B] 325s Get:3 http://ftpmaster.internal/ubuntu oracular-proposed/main Sources [36.1 kB] 325s Get:4 http://ftpmaster.internal/ubuntu oracular-proposed/universe Sources [389 kB] 325s Get:5 http://ftpmaster.internal/ubuntu oracular-proposed/restricted Sources [7052 B] 325s Get:6 http://ftpmaster.internal/ubuntu oracular-proposed/main armhf Packages [34.8 kB] 325s Get:7 http://ftpmaster.internal/ubuntu oracular-proposed/restricted armhf Packages [1860 B] 325s Get:8 http://ftpmaster.internal/ubuntu oracular-proposed/universe armhf Packages [293 kB] 325s Get:9 http://ftpmaster.internal/ubuntu oracular-proposed/multiverse armhf Packages [2528 B] 325s Fetched 877 kB in 1s (1060 kB/s) 325s Reading package lists... 340s tee: /proc/self/fd/2: Permission denied 361s Hit:1 http://ftpmaster.internal/ubuntu oracular-proposed InRelease 361s Hit:2 http://ftpmaster.internal/ubuntu oracular InRelease 361s Hit:3 http://ftpmaster.internal/ubuntu oracular-updates InRelease 361s Hit:4 http://ftpmaster.internal/ubuntu oracular-security InRelease 362s Reading package lists... 362s Reading package lists... 363s Building dependency tree... 363s Reading state information... 363s Calculating upgrade... 
364s The following packages will be upgraded: 364s libldap-common libldap2 364s 2 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 364s Need to get 203 kB of archives. 364s After this operation, 0 B of additional disk space will be used. 364s Get:1 http://ftpmaster.internal/ubuntu oracular/main armhf libldap-common all 2.6.7+dfsg-1~exp1ubuntu9 [31.5 kB] 364s Get:2 http://ftpmaster.internal/ubuntu oracular/main armhf libldap2 armhf 2.6.7+dfsg-1~exp1ubuntu9 [171 kB] 364s Fetched 203 kB in 0s (482 kB/s) 364s (Reading database ... 58402 files and directories currently installed.) 364s Preparing to unpack .../libldap-common_2.6.7+dfsg-1~exp1ubuntu9_all.deb ... 364s Unpacking libldap-common (2.6.7+dfsg-1~exp1ubuntu9) over (2.6.7+dfsg-1~exp1ubuntu8) ... 365s Preparing to unpack .../libldap2_2.6.7+dfsg-1~exp1ubuntu9_armhf.deb ... 365s Unpacking libldap2:armhf (2.6.7+dfsg-1~exp1ubuntu9) over (2.6.7+dfsg-1~exp1ubuntu8) ... 365s Setting up libldap-common (2.6.7+dfsg-1~exp1ubuntu9) ... 365s Setting up libldap2:armhf (2.6.7+dfsg-1~exp1ubuntu9) ... 365s Processing triggers for man-db (2.12.1-2) ... 365s Processing triggers for libc-bin (2.39-0ubuntu9) ... 365s Reading package lists... 365s Building dependency tree... 365s Reading state information... 366s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 368s autopkgtest [10:33:27]: rebooting testbed after setup commands that affected boot 439s Reading package lists...
439s Building dependency tree... 439s Reading state information... 440s Starting pkgProblemResolver with broken count: 0 440s Starting 2 pkgProblemResolver with broken count: 0 440s Done 441s The following additional packages will be installed: 441s fonts-font-awesome fonts-glyphicons-halflings fonts-lato fonts-mathjax gdb 441s jupyter-core jupyter-notebook libbabeltrace1 libc6-dbg libdebuginfod-common 441s libdebuginfod1t64 libdw1t64 libjs-backbone libjs-bootstrap 441s libjs-bootstrap-tour libjs-codemirror libjs-es6-promise libjs-jed 441s libjs-jquery libjs-jquery-typeahead libjs-jquery-ui libjs-marked 441s libjs-mathjax libjs-moment libjs-requirejs libjs-requirejs-text 441s libjs-sphinxdoc libjs-text-encoding libjs-underscore libjs-xterm libnorm1t64 441s libpgm-5.3-0t64 libpython3.12t64 libsodium23 libsource-highlight-common 441s libsource-highlight4t64 libzmq5 node-jed python-notebook-doc 441s python-tinycss2-common python3-argon2 python3-asttokens python3-bleach 441s python3-bs4 python3-bytecode python3-comm python3-coverage python3-dateutil 441s python3-debugpy python3-decorator python3-defusedxml python3-entrypoints 441s python3-executing python3-fastjsonschema python3-html5lib python3-ipykernel 441s python3-ipython python3-ipython-genutils python3-jedi python3-jupyter-client 441s python3-jupyter-core python3-jupyterlab-pygments python3-matplotlib-inline 441s python3-mistune python3-nbclient python3-nbconvert python3-nbformat 441s python3-nest-asyncio python3-notebook python3-packaging 441s python3-pandocfilters python3-parso python3-pexpect python3-platformdirs 441s python3-prometheus-client python3-prompt-toolkit python3-psutil 441s python3-ptyprocess python3-pure-eval python3-py python3-pydevd 441s python3-send2trash python3-soupsieve python3-stack-data python3-terminado 441s python3-tinycss2 python3-tornado python3-traitlets python3-typeshed 441s python3-wcwidth python3-webencodings python3-zmq sphinx-rtd-theme-common 441s Suggested packages: 441s gdb-doc 
gdbserver libjs-jquery-lazyload libjs-json libjs-jquery-ui-docs 441s fonts-mathjax-extras fonts-stix libjs-mathjax-doc python-argon2-doc 441s python-bleach-doc python-bytecode-doc python-coverage-doc 441s python-fastjsonschema-doc python3-genshi python3-lxml python-ipython-doc 441s python3-pip python-nbconvert-doc texlive-fonts-recommended 441s texlive-plain-generic texlive-xetex python-pexpect-doc subversion 441s python3-pytest pydevd python-terminado-doc python-tinycss2-doc 441s python3-pycurl python-tornado-doc python3-twisted 441s Recommended packages: 441s javascript-common python3-lxml python3-matplotlib pandoc python3-ipywidgets 441s The following NEW packages will be installed: 441s autopkgtest-satdep fonts-font-awesome fonts-glyphicons-halflings fonts-lato 441s fonts-mathjax gdb jupyter-core jupyter-notebook libbabeltrace1 libc6-dbg 441s libdebuginfod-common libdebuginfod1t64 libdw1t64 libjs-backbone 441s libjs-bootstrap libjs-bootstrap-tour libjs-codemirror libjs-es6-promise 441s libjs-jed libjs-jquery libjs-jquery-typeahead libjs-jquery-ui libjs-marked 441s libjs-mathjax libjs-moment libjs-requirejs libjs-requirejs-text 441s libjs-sphinxdoc libjs-text-encoding libjs-underscore libjs-xterm libnorm1t64 441s libpgm-5.3-0t64 libpython3.12t64 libsodium23 libsource-highlight-common 441s libsource-highlight4t64 libzmq5 node-jed python-notebook-doc 441s python-tinycss2-common python3-argon2 python3-asttokens python3-bleach 441s python3-bs4 python3-bytecode python3-comm python3-coverage python3-dateutil 441s python3-debugpy python3-decorator python3-defusedxml python3-entrypoints 441s python3-executing python3-fastjsonschema python3-html5lib python3-ipykernel 441s python3-ipython python3-ipython-genutils python3-jedi python3-jupyter-client 441s python3-jupyter-core python3-jupyterlab-pygments python3-matplotlib-inline 441s python3-mistune python3-nbclient python3-nbconvert python3-nbformat 441s python3-nest-asyncio python3-notebook python3-packaging 441s 
python3-pandocfilters python3-parso python3-pexpect python3-platformdirs 441s python3-prometheus-client python3-prompt-toolkit python3-psutil 441s python3-ptyprocess python3-pure-eval python3-py python3-pydevd 441s python3-send2trash python3-soupsieve python3-stack-data python3-terminado 441s python3-tinycss2 python3-tornado python3-traitlets python3-typeshed 441s python3-wcwidth python3-webencodings python3-zmq sphinx-rtd-theme-common 441s 0 upgraded, 94 newly installed, 0 to remove and 0 not upgraded. 441s Need to get 39.0 MB/39.0 MB of archives. 441s After this operation, 171 MB of additional disk space will be used. 441s Get:1 /tmp/autopkgtest.FXI16z/2-autopkgtest-satdep.deb autopkgtest-satdep armhf 0 [724 B] 441s Get:2 http://ftpmaster.internal/ubuntu oracular/main armhf fonts-lato all 2.015-1 [2781 kB] 442s Get:3 http://ftpmaster.internal/ubuntu oracular/main armhf libdebuginfod-common all 0.191-1 [14.6 kB] 442s Get:4 http://ftpmaster.internal/ubuntu oracular/main armhf fonts-font-awesome all 5.0.10+really4.7.0~dfsg-4.1 [516 kB] 442s Get:5 http://ftpmaster.internal/ubuntu oracular/universe armhf fonts-glyphicons-halflings all 1.009~3.4.1+dfsg-3 [118 kB] 442s Get:6 http://ftpmaster.internal/ubuntu oracular/main armhf fonts-mathjax all 2.7.9+dfsg-1 [2208 kB] 442s Get:7 http://ftpmaster.internal/ubuntu oracular/main armhf libdw1t64 armhf 0.191-1 [238 kB] 442s Get:8 http://ftpmaster.internal/ubuntu oracular/main armhf libbabeltrace1 armhf 1.5.11-3build3 [154 kB] 442s Get:9 http://ftpmaster.internal/ubuntu oracular/main armhf libdebuginfod1t64 armhf 0.191-1 [15.8 kB] 442s Get:10 http://ftpmaster.internal/ubuntu oracular/main armhf libpython3.12t64 armhf 3.12.4-1 [2059 kB] 442s Get:11 http://ftpmaster.internal/ubuntu oracular/main armhf libsource-highlight-common all 3.1.9-4.3build1 [64.2 kB] 442s Get:12 http://ftpmaster.internal/ubuntu oracular/main armhf libsource-highlight4t64 armhf 3.1.9-4.3build1 [306 kB] 442s Get:13 http://ftpmaster.internal/ubuntu 
oracular/main armhf libc6-dbg armhf 2.39-0ubuntu9 [6017 kB] 442s Get:14 http://ftpmaster.internal/ubuntu oracular/main armhf gdb armhf 15.0.50.20240403-0ubuntu1 [3852 kB] 442s Get:15 http://ftpmaster.internal/ubuntu oracular/main armhf python3-platformdirs all 4.2.1-1 [16.3 kB] 442s Get:16 http://ftpmaster.internal/ubuntu oracular-proposed/universe armhf python3-traitlets all 5.14.3-1 [71.3 kB] 442s Get:17 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-jupyter-core all 5.3.2-2 [25.5 kB] 442s Get:18 http://ftpmaster.internal/ubuntu oracular/universe armhf jupyter-core all 5.3.2-2 [4038 B] 442s Get:19 http://ftpmaster.internal/ubuntu oracular/main armhf libjs-underscore all 1.13.4~dfsg+~1.11.4-3 [118 kB] 442s Get:20 http://ftpmaster.internal/ubuntu oracular/universe armhf libjs-backbone all 1.4.1~dfsg+~1.4.15-3 [185 kB] 442s Get:21 http://ftpmaster.internal/ubuntu oracular/universe armhf libjs-bootstrap all 3.4.1+dfsg-3 [129 kB] 442s Get:22 http://ftpmaster.internal/ubuntu oracular/main armhf libjs-jquery all 3.6.1+dfsg+~3.5.14-1 [328 kB] 442s Get:23 http://ftpmaster.internal/ubuntu oracular/universe armhf libjs-bootstrap-tour all 0.12.0+dfsg-5 [21.4 kB] 442s Get:24 http://ftpmaster.internal/ubuntu oracular/universe armhf libjs-codemirror all 5.65.0+~cs5.83.9-3 [755 kB] 442s Get:25 http://ftpmaster.internal/ubuntu oracular/universe armhf libjs-es6-promise all 4.2.8-12 [14.1 kB] 442s Get:26 http://ftpmaster.internal/ubuntu oracular/universe armhf node-jed all 1.1.1-4 [15.2 kB] 442s Get:27 http://ftpmaster.internal/ubuntu oracular/universe armhf libjs-jed all 1.1.1-4 [2584 B] 442s Get:28 http://ftpmaster.internal/ubuntu oracular/universe armhf libjs-jquery-typeahead all 2.11.0+dfsg1-3 [48.9 kB] 442s Get:29 http://ftpmaster.internal/ubuntu oracular/universe armhf libjs-jquery-ui all 1.13.2+dfsg-1 [252 kB] 442s Get:30 http://ftpmaster.internal/ubuntu oracular/universe armhf libjs-marked all 4.2.3+ds+~4.0.7-3 [36.2 kB] 442s Get:31 
http://ftpmaster.internal/ubuntu oracular/main armhf libjs-mathjax all 2.7.9+dfsg-1 [5665 kB] 442s Get:32 http://ftpmaster.internal/ubuntu oracular/universe armhf libjs-moment all 2.29.4+ds-1 [147 kB] 442s Get:33 http://ftpmaster.internal/ubuntu oracular/universe armhf libjs-requirejs all 2.3.6+ds+~2.1.37-1 [201 kB] 442s Get:34 http://ftpmaster.internal/ubuntu oracular/universe armhf libjs-requirejs-text all 2.0.12-1.1 [9056 B] 442s Get:35 http://ftpmaster.internal/ubuntu oracular/universe armhf libjs-text-encoding all 0.7.0-5 [140 kB] 442s Get:36 http://ftpmaster.internal/ubuntu oracular/universe armhf libjs-xterm all 5.3.0-2 [476 kB] 442s Get:37 http://ftpmaster.internal/ubuntu oracular/main armhf python3-ptyprocess all 0.7.0-5 [15.1 kB] 442s Get:38 http://ftpmaster.internal/ubuntu oracular/main armhf python3-tornado armhf 6.4.1-1 [298 kB] 442s Get:39 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-terminado all 0.18.1-1 [13.2 kB] 442s Get:40 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-argon2 armhf 21.1.0-2build1 [19.9 kB] 442s Get:41 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-comm all 0.2.1-1 [7016 B] 442s Get:42 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-bytecode all 0.15.1-3 [44.7 kB] 442s Get:43 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-coverage armhf 7.4.4+dfsg1-0ubuntu2 [146 kB] 442s Get:44 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-pydevd armhf 2.10.0+ds-10ubuntu1 [613 kB] 442s Get:45 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-debugpy all 1.8.0+ds-4ubuntu4 [67.6 kB] 442s Get:46 http://ftpmaster.internal/ubuntu oracular/main armhf python3-decorator all 5.1.1-5 [10.1 kB] 442s Get:47 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-parso all 0.8.3-1 [67.2 kB] 442s Get:48 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-typeshed all 0.0~git20231111.6764465-3 [1274 kB] 442s Get:49 
http://ftpmaster.internal/ubuntu oracular/universe armhf python3-jedi all 0.19.1+ds1-1 [693 kB] 442s Get:50 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-matplotlib-inline all 0.1.6-2 [8784 B] 442s Get:51 http://ftpmaster.internal/ubuntu oracular/main armhf python3-pexpect all 4.9-2 [48.1 kB] 442s Get:52 http://ftpmaster.internal/ubuntu oracular/main armhf python3-wcwidth all 0.2.5+dfsg1-1.1ubuntu1 [22.5 kB] 442s Get:53 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-prompt-toolkit all 3.0.46-1 [256 kB] 442s Get:54 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-asttokens all 2.4.1-1 [20.9 kB] 442s Get:55 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-executing all 2.0.1-0.1 [23.3 kB] 442s Get:56 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-pure-eval all 0.2.2-2 [11.1 kB] 442s Get:57 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-stack-data all 0.6.3-1 [22.0 kB] 442s Get:58 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-ipython all 8.20.0-1ubuntu1 [561 kB] 442s Get:59 http://ftpmaster.internal/ubuntu oracular/main armhf python3-dateutil all 2.9.0-2 [80.3 kB] 443s Get:60 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-entrypoints all 0.4-2 [7146 B] 443s Get:61 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-nest-asyncio all 1.5.4-1 [6256 B] 443s Get:62 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-py all 1.11.0-2 [72.7 kB] 443s Get:63 http://ftpmaster.internal/ubuntu oracular/universe armhf libnorm1t64 armhf 1.5.9+dfsg-3.1build1 [206 kB] 443s Get:64 http://ftpmaster.internal/ubuntu oracular/universe armhf libpgm-5.3-0t64 armhf 5.3.128~dfsg-2.1build1 [171 kB] 443s Get:65 http://ftpmaster.internal/ubuntu oracular/main armhf libsodium23 armhf 1.0.18-1build3 [139 kB] 443s Get:66 http://ftpmaster.internal/ubuntu oracular/universe armhf libzmq5 armhf 4.3.5-1build2 [262 kB] 443s Get:67 
http://ftpmaster.internal/ubuntu oracular/universe armhf python3-zmq armhf 24.0.1-5build1 [275 kB] 443s Get:68 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-jupyter-client all 7.4.9-2ubuntu1 [90.5 kB] 443s Get:69 http://ftpmaster.internal/ubuntu oracular/main armhf python3-packaging all 24.0-1 [41.1 kB] 443s Get:70 http://ftpmaster.internal/ubuntu oracular/main armhf python3-psutil armhf 5.9.8-2build2 [194 kB] 443s Get:71 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-ipykernel all 6.29.3-1ubuntu1 [82.6 kB] 443s Get:72 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-ipython-genutils all 0.2.0-6 [22.0 kB] 443s Get:73 http://ftpmaster.internal/ubuntu oracular/main armhf python3-webencodings all 0.5.1-5 [11.5 kB] 443s Get:74 http://ftpmaster.internal/ubuntu oracular/main armhf python3-html5lib all 1.1-6 [88.8 kB] 443s Get:75 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-bleach all 6.1.0-2 [49.6 kB] 443s Get:76 http://ftpmaster.internal/ubuntu oracular/main armhf python3-soupsieve all 2.5-1 [33.0 kB] 443s Get:77 http://ftpmaster.internal/ubuntu oracular/main armhf python3-bs4 all 4.12.3-1 [109 kB] 443s Get:78 http://ftpmaster.internal/ubuntu oracular/main armhf python3-defusedxml all 0.7.1-2 [42.0 kB] 443s Get:79 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-jupyterlab-pygments all 0.2.2-3 [6054 B] 443s Get:80 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-mistune all 3.0.2-1 [32.8 kB] 443s Get:81 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-fastjsonschema all 2.19.1-1 [19.7 kB] 443s Get:82 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-nbformat all 5.9.1-1 [41.2 kB] 443s Get:83 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-nbclient all 0.8.0-1 [55.6 kB] 443s Get:84 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-pandocfilters all 1.5.1-1 [23.6 kB] 443s Get:85 
http://ftpmaster.internal/ubuntu oracular/universe armhf python-tinycss2-common all 1.3.0-1 [34.1 kB] 443s Get:86 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-tinycss2 all 1.3.0-1 [19.6 kB] 443s Get:87 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-nbconvert all 7.16.4-1 [156 kB] 443s Get:88 http://ftpmaster.internal/ubuntu oracular/main armhf python3-prometheus-client all 0.19.0+ds1-1 [41.7 kB] 443s Get:89 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-send2trash all 1.8.2-1 [15.5 kB] 443s Get:90 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-notebook all 6.4.12-2.2ubuntu1 [1566 kB] 443s Get:91 http://ftpmaster.internal/ubuntu oracular/universe armhf jupyter-notebook all 6.4.12-2.2ubuntu1 [10.4 kB] 443s Get:92 http://ftpmaster.internal/ubuntu oracular/main armhf libjs-sphinxdoc all 7.2.6-8 [150 kB] 443s Get:93 http://ftpmaster.internal/ubuntu oracular/main armhf sphinx-rtd-theme-common all 2.0.0+dfsg-1 [1012 kB] 443s Get:94 http://ftpmaster.internal/ubuntu oracular/universe armhf python-notebook-doc all 6.4.12-2.2ubuntu1 [2540 kB] 443s Preconfiguring packages ... 444s Fetched 39.0 MB in 2s (18.2 MB/s) 444s Selecting previously unselected package fonts-lato. 444s (Reading database ... 58402 files and directories currently installed.) 444s Preparing to unpack .../00-fonts-lato_2.015-1_all.deb ... 444s Unpacking fonts-lato (2.015-1) ...
444s Selecting previously unselected package libdebuginfod-common. 444s Preparing to unpack .../01-libdebuginfod-common_0.191-1_all.deb ... 444s Unpacking libdebuginfod-common (0.191-1) ... 444s Selecting previously unselected package fonts-font-awesome. 444s Preparing to unpack .../02-fonts-font-awesome_5.0.10+really4.7.0~dfsg-4.1_all.deb ... 444s Unpacking fonts-font-awesome (5.0.10+really4.7.0~dfsg-4.1) ... 444s Selecting previously unselected package fonts-glyphicons-halflings. 444s Preparing to unpack .../03-fonts-glyphicons-halflings_1.009~3.4.1+dfsg-3_all.deb ... 444s Unpacking fonts-glyphicons-halflings (1.009~3.4.1+dfsg-3) ... 444s Selecting previously unselected package fonts-mathjax. 444s Preparing to unpack .../04-fonts-mathjax_2.7.9+dfsg-1_all.deb ... 444s Unpacking fonts-mathjax (2.7.9+dfsg-1) ... 444s Selecting previously unselected package libdw1t64:armhf. 444s Preparing to unpack .../05-libdw1t64_0.191-1_armhf.deb ... 444s Unpacking libdw1t64:armhf (0.191-1) ... 444s Selecting previously unselected package libbabeltrace1:armhf. 444s Preparing to unpack .../06-libbabeltrace1_1.5.11-3build3_armhf.deb ... 444s Unpacking libbabeltrace1:armhf (1.5.11-3build3) ... 444s Selecting previously unselected package libdebuginfod1t64:armhf. 444s Preparing to unpack .../07-libdebuginfod1t64_0.191-1_armhf.deb ... 444s Unpacking libdebuginfod1t64:armhf (0.191-1) ... 444s Selecting previously unselected package libpython3.12t64:armhf. 444s Preparing to unpack .../08-libpython3.12t64_3.12.4-1_armhf.deb ... 444s Unpacking libpython3.12t64:armhf (3.12.4-1) ... 445s Selecting previously unselected package libsource-highlight-common. 445s Preparing to unpack .../09-libsource-highlight-common_3.1.9-4.3build1_all.deb ... 445s Unpacking libsource-highlight-common (3.1.9-4.3build1) ... 445s Selecting previously unselected package libsource-highlight4t64:armhf. 445s Preparing to unpack .../10-libsource-highlight4t64_3.1.9-4.3build1_armhf.deb ... 
445s Unpacking libsource-highlight4t64:armhf (3.1.9-4.3build1) ... 445s Selecting previously unselected package libc6-dbg:armhf. 445s Preparing to unpack .../11-libc6-dbg_2.39-0ubuntu9_armhf.deb ... 445s Unpacking libc6-dbg:armhf (2.39-0ubuntu9) ... 445s Selecting previously unselected package gdb. 445s Preparing to unpack .../12-gdb_15.0.50.20240403-0ubuntu1_armhf.deb ... 445s Unpacking gdb (15.0.50.20240403-0ubuntu1) ... 445s Selecting previously unselected package python3-platformdirs. 445s Preparing to unpack .../13-python3-platformdirs_4.2.1-1_all.deb ... 445s Unpacking python3-platformdirs (4.2.1-1) ... 445s Selecting previously unselected package python3-traitlets. 445s Preparing to unpack .../14-python3-traitlets_5.14.3-1_all.deb ... 445s Unpacking python3-traitlets (5.14.3-1) ... 445s Selecting previously unselected package python3-jupyter-core. 445s Preparing to unpack .../15-python3-jupyter-core_5.3.2-2_all.deb ... 445s Unpacking python3-jupyter-core (5.3.2-2) ... 445s Selecting previously unselected package jupyter-core. 445s Preparing to unpack .../16-jupyter-core_5.3.2-2_all.deb ... 445s Unpacking jupyter-core (5.3.2-2) ... 445s Selecting previously unselected package libjs-underscore. 445s Preparing to unpack .../17-libjs-underscore_1.13.4~dfsg+~1.11.4-3_all.deb ... 445s Unpacking libjs-underscore (1.13.4~dfsg+~1.11.4-3) ... 445s Selecting previously unselected package libjs-backbone. 445s Preparing to unpack .../18-libjs-backbone_1.4.1~dfsg+~1.4.15-3_all.deb ... 445s Unpacking libjs-backbone (1.4.1~dfsg+~1.4.15-3) ... 445s Selecting previously unselected package libjs-bootstrap. 445s Preparing to unpack .../19-libjs-bootstrap_3.4.1+dfsg-3_all.deb ... 445s Unpacking libjs-bootstrap (3.4.1+dfsg-3) ... 445s Selecting previously unselected package libjs-jquery. 445s Preparing to unpack .../20-libjs-jquery_3.6.1+dfsg+~3.5.14-1_all.deb ... 445s Unpacking libjs-jquery (3.6.1+dfsg+~3.5.14-1) ... 
445s Selecting previously unselected package libjs-bootstrap-tour. 445s Preparing to unpack .../21-libjs-bootstrap-tour_0.12.0+dfsg-5_all.deb ... 445s Unpacking libjs-bootstrap-tour (0.12.0+dfsg-5) ... 445s Selecting previously unselected package libjs-codemirror. 445s Preparing to unpack .../22-libjs-codemirror_5.65.0+~cs5.83.9-3_all.deb ... 445s Unpacking libjs-codemirror (5.65.0+~cs5.83.9-3) ... 445s Selecting previously unselected package libjs-es6-promise. 445s Preparing to unpack .../23-libjs-es6-promise_4.2.8-12_all.deb ... 445s Unpacking libjs-es6-promise (4.2.8-12) ... 445s Selecting previously unselected package node-jed. 445s Preparing to unpack .../24-node-jed_1.1.1-4_all.deb ... 445s Unpacking node-jed (1.1.1-4) ... 445s Selecting previously unselected package libjs-jed. 445s Preparing to unpack .../25-libjs-jed_1.1.1-4_all.deb ... 445s Unpacking libjs-jed (1.1.1-4) ... 445s Selecting previously unselected package libjs-jquery-typeahead. 446s Preparing to unpack .../26-libjs-jquery-typeahead_2.11.0+dfsg1-3_all.deb ... 446s Unpacking libjs-jquery-typeahead (2.11.0+dfsg1-3) ... 446s Selecting previously unselected package libjs-jquery-ui. 446s Preparing to unpack .../27-libjs-jquery-ui_1.13.2+dfsg-1_all.deb ... 446s Unpacking libjs-jquery-ui (1.13.2+dfsg-1) ... 446s Selecting previously unselected package libjs-marked. 446s Preparing to unpack .../28-libjs-marked_4.2.3+ds+~4.0.7-3_all.deb ... 446s Unpacking libjs-marked (4.2.3+ds+~4.0.7-3) ... 446s Selecting previously unselected package libjs-mathjax. 446s Preparing to unpack .../29-libjs-mathjax_2.7.9+dfsg-1_all.deb ... 446s Unpacking libjs-mathjax (2.7.9+dfsg-1) ... 446s Selecting previously unselected package libjs-moment. 446s Preparing to unpack .../30-libjs-moment_2.29.4+ds-1_all.deb ... 446s Unpacking libjs-moment (2.29.4+ds-1) ... 446s Selecting previously unselected package libjs-requirejs. 446s Preparing to unpack .../31-libjs-requirejs_2.3.6+ds+~2.1.37-1_all.deb ... 
446s Unpacking libjs-requirejs (2.3.6+ds+~2.1.37-1) ... 446s Selecting previously unselected package libjs-requirejs-text. 446s Preparing to unpack .../32-libjs-requirejs-text_2.0.12-1.1_all.deb ... 446s Unpacking libjs-requirejs-text (2.0.12-1.1) ... 446s Selecting previously unselected package libjs-text-encoding. 446s Preparing to unpack .../33-libjs-text-encoding_0.7.0-5_all.deb ... 446s Unpacking libjs-text-encoding (0.7.0-5) ... 447s Selecting previously unselected package libjs-xterm. 447s Preparing to unpack .../34-libjs-xterm_5.3.0-2_all.deb ... 447s Unpacking libjs-xterm (5.3.0-2) ... 447s Selecting previously unselected package python3-ptyprocess. 447s Preparing to unpack .../35-python3-ptyprocess_0.7.0-5_all.deb ... 447s Unpacking python3-ptyprocess (0.7.0-5) ... 447s Selecting previously unselected package python3-tornado. 447s Preparing to unpack .../36-python3-tornado_6.4.1-1_armhf.deb ... 447s Unpacking python3-tornado (6.4.1-1) ... 447s Selecting previously unselected package python3-terminado. 447s Preparing to unpack .../37-python3-terminado_0.18.1-1_all.deb ... 447s Unpacking python3-terminado (0.18.1-1) ... 447s Selecting previously unselected package python3-argon2. 447s Preparing to unpack .../38-python3-argon2_21.1.0-2build1_armhf.deb ... 447s Unpacking python3-argon2 (21.1.0-2build1) ... 447s Selecting previously unselected package python3-comm. 447s Preparing to unpack .../39-python3-comm_0.2.1-1_all.deb ... 447s Unpacking python3-comm (0.2.1-1) ... 447s Selecting previously unselected package python3-bytecode. 447s Preparing to unpack .../40-python3-bytecode_0.15.1-3_all.deb ... 447s Unpacking python3-bytecode (0.15.1-3) ... 447s Selecting previously unselected package python3-coverage. 447s Preparing to unpack .../41-python3-coverage_7.4.4+dfsg1-0ubuntu2_armhf.deb ... 447s Unpacking python3-coverage (7.4.4+dfsg1-0ubuntu2) ... 447s Selecting previously unselected package python3-pydevd. 
447s Preparing to unpack .../42-python3-pydevd_2.10.0+ds-10ubuntu1_armhf.deb ... 447s Unpacking python3-pydevd (2.10.0+ds-10ubuntu1) ... 447s Selecting previously unselected package python3-debugpy. 447s Preparing to unpack .../43-python3-debugpy_1.8.0+ds-4ubuntu4_all.deb ... 447s Unpacking python3-debugpy (1.8.0+ds-4ubuntu4) ... 447s Selecting previously unselected package python3-decorator. 447s Preparing to unpack .../44-python3-decorator_5.1.1-5_all.deb ... 447s Unpacking python3-decorator (5.1.1-5) ... 447s Selecting previously unselected package python3-parso. 447s Preparing to unpack .../45-python3-parso_0.8.3-1_all.deb ... 447s Unpacking python3-parso (0.8.3-1) ... 447s Selecting previously unselected package python3-typeshed. 447s Preparing to unpack .../46-python3-typeshed_0.0~git20231111.6764465-3_all.deb ... 447s Unpacking python3-typeshed (0.0~git20231111.6764465-3) ... 448s Selecting previously unselected package python3-jedi. 448s Preparing to unpack .../47-python3-jedi_0.19.1+ds1-1_all.deb ... 448s Unpacking python3-jedi (0.19.1+ds1-1) ... 448s Selecting previously unselected package python3-matplotlib-inline. 448s Preparing to unpack .../48-python3-matplotlib-inline_0.1.6-2_all.deb ... 448s Unpacking python3-matplotlib-inline (0.1.6-2) ... 448s Selecting previously unselected package python3-pexpect. 448s Preparing to unpack .../49-python3-pexpect_4.9-2_all.deb ... 448s Unpacking python3-pexpect (4.9-2) ... 448s Selecting previously unselected package python3-wcwidth. 448s Preparing to unpack .../50-python3-wcwidth_0.2.5+dfsg1-1.1ubuntu1_all.deb ... 448s Unpacking python3-wcwidth (0.2.5+dfsg1-1.1ubuntu1) ... 448s Selecting previously unselected package python3-prompt-toolkit. 448s Preparing to unpack .../51-python3-prompt-toolkit_3.0.46-1_all.deb ... 448s Unpacking python3-prompt-toolkit (3.0.46-1) ... 448s Selecting previously unselected package python3-asttokens. 448s Preparing to unpack .../52-python3-asttokens_2.4.1-1_all.deb ... 
448s Unpacking python3-asttokens (2.4.1-1) ... 448s Selecting previously unselected package python3-executing. 448s Preparing to unpack .../53-python3-executing_2.0.1-0.1_all.deb ... 448s Unpacking python3-executing (2.0.1-0.1) ... 448s Selecting previously unselected package python3-pure-eval. 448s Preparing to unpack .../54-python3-pure-eval_0.2.2-2_all.deb ... 448s Unpacking python3-pure-eval (0.2.2-2) ... 448s Selecting previously unselected package python3-stack-data. 448s Preparing to unpack .../55-python3-stack-data_0.6.3-1_all.deb ... 448s Unpacking python3-stack-data (0.6.3-1) ... 449s Selecting previously unselected package python3-ipython. 449s Preparing to unpack .../56-python3-ipython_8.20.0-1ubuntu1_all.deb ... 449s Unpacking python3-ipython (8.20.0-1ubuntu1) ... 449s Selecting previously unselected package python3-dateutil. 449s Preparing to unpack .../57-python3-dateutil_2.9.0-2_all.deb ... 449s Unpacking python3-dateutil (2.9.0-2) ... 449s Selecting previously unselected package python3-entrypoints. 449s Preparing to unpack .../58-python3-entrypoints_0.4-2_all.deb ... 449s Unpacking python3-entrypoints (0.4-2) ... 449s Selecting previously unselected package python3-nest-asyncio. 449s Preparing to unpack .../59-python3-nest-asyncio_1.5.4-1_all.deb ... 449s Unpacking python3-nest-asyncio (1.5.4-1) ... 449s Selecting previously unselected package python3-py. 449s Preparing to unpack .../60-python3-py_1.11.0-2_all.deb ... 449s Unpacking python3-py (1.11.0-2) ... 449s Selecting previously unselected package libnorm1t64:armhf. 449s Preparing to unpack .../61-libnorm1t64_1.5.9+dfsg-3.1build1_armhf.deb ... 449s Unpacking libnorm1t64:armhf (1.5.9+dfsg-3.1build1) ... 449s Selecting previously unselected package libpgm-5.3-0t64:armhf. 449s Preparing to unpack .../62-libpgm-5.3-0t64_5.3.128~dfsg-2.1build1_armhf.deb ... 449s Unpacking libpgm-5.3-0t64:armhf (5.3.128~dfsg-2.1build1) ... 449s Selecting previously unselected package libsodium23:armhf. 
449s Preparing to unpack .../63-libsodium23_1.0.18-1build3_armhf.deb ... 449s Unpacking libsodium23:armhf (1.0.18-1build3) ... 449s Selecting previously unselected package libzmq5:armhf. 449s Preparing to unpack .../64-libzmq5_4.3.5-1build2_armhf.deb ... 449s Unpacking libzmq5:armhf (4.3.5-1build2) ... 449s Selecting previously unselected package python3-zmq. 449s Preparing to unpack .../65-python3-zmq_24.0.1-5build1_armhf.deb ... 449s Unpacking python3-zmq (24.0.1-5build1) ... 449s Selecting previously unselected package python3-jupyter-client. 449s Preparing to unpack .../66-python3-jupyter-client_7.4.9-2ubuntu1_all.deb ... 449s Unpacking python3-jupyter-client (7.4.9-2ubuntu1) ... 449s Selecting previously unselected package python3-packaging. 449s Preparing to unpack .../67-python3-packaging_24.0-1_all.deb ... 449s Unpacking python3-packaging (24.0-1) ... 449s Selecting previously unselected package python3-psutil. 449s Preparing to unpack .../68-python3-psutil_5.9.8-2build2_armhf.deb ... 449s Unpacking python3-psutil (5.9.8-2build2) ... 449s Selecting previously unselected package python3-ipykernel. 449s Preparing to unpack .../69-python3-ipykernel_6.29.3-1ubuntu1_all.deb ... 449s Unpacking python3-ipykernel (6.29.3-1ubuntu1) ... 449s Selecting previously unselected package python3-ipython-genutils. 449s Preparing to unpack .../70-python3-ipython-genutils_0.2.0-6_all.deb ... 449s Unpacking python3-ipython-genutils (0.2.0-6) ... 449s Selecting previously unselected package python3-webencodings. 449s Preparing to unpack .../71-python3-webencodings_0.5.1-5_all.deb ... 449s Unpacking python3-webencodings (0.5.1-5) ... 449s Selecting previously unselected package python3-html5lib. 449s Preparing to unpack .../72-python3-html5lib_1.1-6_all.deb ... 449s Unpacking python3-html5lib (1.1-6) ... 449s Selecting previously unselected package python3-bleach. 450s Preparing to unpack .../73-python3-bleach_6.1.0-2_all.deb ... 450s Unpacking python3-bleach (6.1.0-2) ... 
450s Selecting previously unselected package python3-soupsieve. 450s Preparing to unpack .../74-python3-soupsieve_2.5-1_all.deb ... 450s Unpacking python3-soupsieve (2.5-1) ... 450s Selecting previously unselected package python3-bs4. 450s Preparing to unpack .../75-python3-bs4_4.12.3-1_all.deb ... 450s Unpacking python3-bs4 (4.12.3-1) ... 450s Selecting previously unselected package python3-defusedxml. 450s Preparing to unpack .../76-python3-defusedxml_0.7.1-2_all.deb ... 450s Unpacking python3-defusedxml (0.7.1-2) ... 450s Selecting previously unselected package python3-jupyterlab-pygments. 450s Preparing to unpack .../77-python3-jupyterlab-pygments_0.2.2-3_all.deb ... 450s Unpacking python3-jupyterlab-pygments (0.2.2-3) ... 450s Selecting previously unselected package python3-mistune. 450s Preparing to unpack .../78-python3-mistune_3.0.2-1_all.deb ... 450s Unpacking python3-mistune (3.0.2-1) ... 450s Selecting previously unselected package python3-fastjsonschema. 450s Preparing to unpack .../79-python3-fastjsonschema_2.19.1-1_all.deb ... 450s Unpacking python3-fastjsonschema (2.19.1-1) ... 450s Selecting previously unselected package python3-nbformat. 450s Preparing to unpack .../80-python3-nbformat_5.9.1-1_all.deb ... 450s Unpacking python3-nbformat (5.9.1-1) ... 450s Selecting previously unselected package python3-nbclient. 450s Preparing to unpack .../81-python3-nbclient_0.8.0-1_all.deb ... 450s Unpacking python3-nbclient (0.8.0-1) ... 450s Selecting previously unselected package python3-pandocfilters. 450s Preparing to unpack .../82-python3-pandocfilters_1.5.1-1_all.deb ... 450s Unpacking python3-pandocfilters (1.5.1-1) ... 450s Selecting previously unselected package python-tinycss2-common. 450s Preparing to unpack .../83-python-tinycss2-common_1.3.0-1_all.deb ... 450s Unpacking python-tinycss2-common (1.3.0-1) ... 450s Selecting previously unselected package python3-tinycss2. 450s Preparing to unpack .../84-python3-tinycss2_1.3.0-1_all.deb ... 
450s Unpacking python3-tinycss2 (1.3.0-1) ... 450s Selecting previously unselected package python3-nbconvert. 450s Preparing to unpack .../85-python3-nbconvert_7.16.4-1_all.deb ... 450s Unpacking python3-nbconvert (7.16.4-1) ... 450s Selecting previously unselected package python3-prometheus-client. 450s Preparing to unpack .../86-python3-prometheus-client_0.19.0+ds1-1_all.deb ... 450s Unpacking python3-prometheus-client (0.19.0+ds1-1) ... 450s Selecting previously unselected package python3-send2trash. 450s Preparing to unpack .../87-python3-send2trash_1.8.2-1_all.deb ... 450s Unpacking python3-send2trash (1.8.2-1) ... 450s Selecting previously unselected package python3-notebook. 450s Preparing to unpack .../88-python3-notebook_6.4.12-2.2ubuntu1_all.deb ... 450s Unpacking python3-notebook (6.4.12-2.2ubuntu1) ... 450s Selecting previously unselected package jupyter-notebook. 450s Preparing to unpack .../89-jupyter-notebook_6.4.12-2.2ubuntu1_all.deb ... 450s Unpacking jupyter-notebook (6.4.12-2.2ubuntu1) ... 450s Selecting previously unselected package libjs-sphinxdoc. 450s Preparing to unpack .../90-libjs-sphinxdoc_7.2.6-8_all.deb ... 450s Unpacking libjs-sphinxdoc (7.2.6-8) ... 451s Selecting previously unselected package sphinx-rtd-theme-common. 451s Preparing to unpack .../91-sphinx-rtd-theme-common_2.0.0+dfsg-1_all.deb ... 451s Unpacking sphinx-rtd-theme-common (2.0.0+dfsg-1) ... 451s Selecting previously unselected package python-notebook-doc. 451s Preparing to unpack .../92-python-notebook-doc_6.4.12-2.2ubuntu1_all.deb ... 451s Unpacking python-notebook-doc (6.4.12-2.2ubuntu1) ... 451s Selecting previously unselected package autopkgtest-satdep. 451s Preparing to unpack .../93-2-autopkgtest-satdep.deb ... 451s Unpacking autopkgtest-satdep (0) ... 451s Setting up python3-entrypoints (0.4-2) ... 451s Setting up libjs-jquery-typeahead (2.11.0+dfsg1-3) ... 451s Setting up python3-tornado (6.4.1-1) ... 451s Setting up libnorm1t64:armhf (1.5.9+dfsg-3.1build1) ... 
451s Setting up python3-pure-eval (0.2.2-2) ... 451s Setting up python3-send2trash (1.8.2-1) ... 452s Setting up fonts-lato (2.015-1) ... 452s Setting up fonts-mathjax (2.7.9+dfsg-1) ... 452s Setting up libsodium23:armhf (1.0.18-1build3) ... 452s Setting up libjs-mathjax (2.7.9+dfsg-1) ... 452s Setting up python3-py (1.11.0-2) ... 452s Setting up libdebuginfod-common (0.191-1) ... 452s Setting up libjs-requirejs-text (2.0.12-1.1) ... 452s Setting up python3-parso (0.8.3-1) ... 452s Setting up python3-defusedxml (0.7.1-2) ... 452s Setting up python3-ipython-genutils (0.2.0-6) ... 452s Setting up python3-asttokens (2.4.1-1) ... 453s Setting up fonts-glyphicons-halflings (1.009~3.4.1+dfsg-3) ... 453s Setting up python3-coverage (7.4.4+dfsg1-0ubuntu2) ... 453s Setting up libjs-moment (2.29.4+ds-1) ... 453s Setting up python3-pandocfilters (1.5.1-1) ... 453s Setting up libjs-requirejs (2.3.6+ds+~2.1.37-1) ... 453s Setting up libjs-es6-promise (4.2.8-12) ... 453s Setting up libjs-text-encoding (0.7.0-5) ... 453s Setting up python3-webencodings (0.5.1-5) ... 453s Setting up python3-platformdirs (4.2.1-1) ... 453s Setting up python3-psutil (5.9.8-2build2) ... 454s Setting up libsource-highlight-common (3.1.9-4.3build1) ... 454s Setting up libc6-dbg:armhf (2.39-0ubuntu9) ... 454s Setting up libdw1t64:armhf (0.191-1) ... 454s Setting up python3-jupyterlab-pygments (0.2.2-3) ... 454s Setting up libpython3.12t64:armhf (3.12.4-1) ... 454s Setting up libpgm-5.3-0t64:armhf (5.3.128~dfsg-2.1build1) ... 454s Setting up python3-decorator (5.1.1-5) ... 454s Setting up python3-packaging (24.0-1) ... 454s Setting up python3-wcwidth (0.2.5+dfsg1-1.1ubuntu1) ... 454s Setting up node-jed (1.1.1-4) ... 454s Setting up python3-typeshed (0.0~git20231111.6764465-3) ... 454s Setting up python3-executing (2.0.1-0.1) ... 454s Setting up libjs-xterm (5.3.0-2) ... 454s Setting up python3-nest-asyncio (1.5.4-1) ... 455s Setting up python3-bytecode (0.15.1-3) ... 
455s Setting up libjs-codemirror (5.65.0+~cs5.83.9-3) ... 455s Setting up libjs-jed (1.1.1-4) ... 455s Setting up python3-html5lib (1.1-6) ... 455s Setting up libbabeltrace1:armhf (1.5.11-3build3) ... 455s Setting up python3-fastjsonschema (2.19.1-1) ... 455s Setting up python3-traitlets (5.14.3-1) ... 455s Setting up python-tinycss2-common (1.3.0-1) ... 455s Setting up python3-argon2 (21.1.0-2build1) ... 456s Setting up python3-dateutil (2.9.0-2) ... 456s Setting up libjs-jquery (3.6.1+dfsg+~3.5.14-1) ... 456s Setting up python3-mistune (3.0.2-1) ... 456s Setting up python3-stack-data (0.6.3-1) ... 456s Setting up python3-soupsieve (2.5-1) ... 456s Setting up fonts-font-awesome (5.0.10+really4.7.0~dfsg-4.1) ... 456s Setting up sphinx-rtd-theme-common (2.0.0+dfsg-1) ... 456s Setting up python3-jupyter-core (5.3.2-2) ... 456s Setting up libjs-bootstrap (3.4.1+dfsg-3) ... 456s Setting up libjs-underscore (1.13.4~dfsg+~1.11.4-3) ... 456s Setting up python3-ptyprocess (0.7.0-5) ... 457s Setting up libjs-marked (4.2.3+ds+~4.0.7-3) ... 457s Setting up python3-prompt-toolkit (3.0.46-1) ... 457s Setting up libdebuginfod1t64:armhf (0.191-1) ... 457s Setting up python3-tinycss2 (1.3.0-1) ... 457s Setting up libzmq5:armhf (4.3.5-1build2) ... 457s Setting up python3-jedi (0.19.1+ds1-1) ... 458s Setting up libjs-bootstrap-tour (0.12.0+dfsg-5) ... 458s Setting up libjs-backbone (1.4.1~dfsg+~1.4.15-3) ... 458s Setting up libsource-highlight4t64:armhf (3.1.9-4.3build1) ... 458s Setting up python3-nbformat (5.9.1-1) ... 458s Setting up python3-bs4 (4.12.3-1) ... 458s Setting up python3-bleach (6.1.0-2) ... 458s Setting up python3-matplotlib-inline (0.1.6-2) ... 458s Setting up python3-comm (0.2.1-1) ... 459s Setting up python3-prometheus-client (0.19.0+ds1-1) ... 459s Setting up gdb (15.0.50.20240403-0ubuntu1) ... 459s Setting up libjs-jquery-ui (1.13.2+dfsg-1) ... 459s Setting up python3-pexpect (4.9-2) ... 459s Setting up python3-zmq (24.0.1-5build1) ... 
459s Setting up libjs-sphinxdoc (7.2.6-8) ... 459s Setting up python3-terminado (0.18.1-1) ... 460s Setting up python3-jupyter-client (7.4.9-2ubuntu1) ... 460s Setting up jupyter-core (5.3.2-2) ... 460s Setting up python3-pydevd (2.10.0+ds-10ubuntu1) ... 460s Setting up python3-debugpy (1.8.0+ds-4ubuntu4) ... 461s Setting up python-notebook-doc (6.4.12-2.2ubuntu1) ... 461s Setting up python3-nbclient (0.8.0-1) ... 461s Setting up python3-ipython (8.20.0-1ubuntu1) ... 462s Setting up python3-ipykernel (6.29.3-1ubuntu1) ... 462s Setting up python3-nbconvert (7.16.4-1) ... 462s Setting up python3-notebook (6.4.12-2.2ubuntu1) ... 463s Setting up jupyter-notebook (6.4.12-2.2ubuntu1) ... 463s Setting up autopkgtest-satdep (0) ... 463s Processing triggers for man-db (2.12.1-2) ... 463s Processing triggers for libc-bin (2.39-0ubuntu9) ... 487s (Reading database ... 75314 files and directories currently installed.) 487s Removing autopkgtest-satdep (0) ... 498s autopkgtest [10:35:37]: test command1: find /usr/lib/python3/dist-packages/notebook -xtype l >&2 498s autopkgtest [10:35:37]: test command1: [----------------------- 501s autopkgtest [10:35:40]: test command1: -----------------------] 505s command1 PASS (superficial) 505s autopkgtest [10:35:44]: test command1: - - - - - - - - - - results - - - - - - - - - - 509s autopkgtest [10:35:48]: test autodep8-python3: preparing testbed 536s autopkgtest [10:36:15]: testbed dpkg architecture: armhf 538s autopkgtest [10:36:17]: testbed apt version: 2.9.5 538s autopkgtest [10:36:17]: @@@@@@@@@@@@@@@@@@@@ test bed setup 545s Get:1 http://ftpmaster.internal/ubuntu oracular-proposed InRelease [110 kB] 546s Get:2 http://ftpmaster.internal/ubuntu oracular-proposed/main Sources [36.1 kB] 546s Get:3 http://ftpmaster.internal/ubuntu oracular-proposed/universe Sources [389 kB] 546s Get:4 http://ftpmaster.internal/ubuntu oracular-proposed/restricted Sources [7052 B] 546s Get:5 http://ftpmaster.internal/ubuntu oracular-proposed/multiverse 
Sources [2576 B] 546s Get:6 http://ftpmaster.internal/ubuntu oracular-proposed/main armhf Packages [34.8 kB] 546s Get:7 http://ftpmaster.internal/ubuntu oracular-proposed/restricted armhf Packages [1860 B] 546s Get:8 http://ftpmaster.internal/ubuntu oracular-proposed/universe armhf Packages [293 kB] 546s Get:9 http://ftpmaster.internal/ubuntu oracular-proposed/multiverse armhf Packages [2528 B] 546s Fetched 877 kB in 1s (1012 kB/s) 546s Reading package lists... 561s tee: /proc/self/fd/2: Permission denied 582s Hit:1 http://ftpmaster.internal/ubuntu oracular-proposed InRelease 582s Hit:2 http://ftpmaster.internal/ubuntu oracular InRelease 582s Hit:3 http://ftpmaster.internal/ubuntu oracular-updates InRelease 582s Hit:4 http://ftpmaster.internal/ubuntu oracular-security InRelease 583s Reading package lists... 584s Reading package lists... 584s Building dependency tree... 584s Reading state information... 585s Calculating upgrade... 586s The following packages will be upgraded: 586s libldap-common libldap2 586s 2 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 586s Need to get 203 kB of archives. 586s After this operation, 0 B of additional disk space will be used. 586s Get:1 http://ftpmaster.internal/ubuntu oracular/main armhf libldap-common all 2.6.7+dfsg-1~exp1ubuntu9 [31.5 kB] 586s Get:2 http://ftpmaster.internal/ubuntu oracular/main armhf libldap2 armhf 2.6.7+dfsg-1~exp1ubuntu9 [171 kB] 587s Fetched 203 kB in 0s (485 kB/s) 587s (Reading database ... 58402 files and directories currently installed.) 587s Preparing to unpack .../libldap-common_2.6.7+dfsg-1~exp1ubuntu9_all.deb ... 587s Unpacking libldap-common (2.6.7+dfsg-1~exp1ubuntu9) over (2.6.7+dfsg-1~exp1ubuntu8) ... 587s Preparing to unpack .../libldap2_2.6.7+dfsg-1~exp1ubuntu9_armhf.deb ... 587s Unpacking libldap2:armhf (2.6.7+dfsg-1~exp1ubuntu9) over (2.6.7+dfsg-1~exp1ubuntu8) ... 587s Setting up libldap-common (2.6.7+dfsg-1~exp1ubuntu9) ... 587s Setting up libldap2:armhf (2.6.7+dfsg-1~exp1ubuntu9) ... 587s Processing triggers for man-db (2.12.1-2) ... 587s Processing triggers for libc-bin (2.39-0ubuntu9) ... 588s Reading package lists... 588s Building dependency tree... 588s Reading state information... 589s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 590s autopkgtest [10:37:09]: rebooting testbed after setup commands that affected boot 664s Reading package lists... 664s Building dependency tree... 664s Reading state information...
665s Starting pkgProblemResolver with broken count: 0 665s Starting 2 pkgProblemResolver with broken count: 0 665s Done 666s The following additional packages will be installed: 666s fonts-font-awesome fonts-glyphicons-halflings fonts-mathjax gdb 666s libbabeltrace1 libc6-dbg libdebuginfod-common libdebuginfod1t64 libdw1t64 666s libjs-backbone libjs-bootstrap libjs-bootstrap-tour libjs-codemirror 666s libjs-es6-promise libjs-jed libjs-jquery libjs-jquery-typeahead 666s libjs-jquery-ui libjs-marked libjs-mathjax libjs-moment libjs-requirejs 666s libjs-requirejs-text libjs-text-encoding libjs-underscore libjs-xterm 666s libnorm1t64 libpgm-5.3-0t64 libpython3.12t64 libsodium23 666s libsource-highlight-common libsource-highlight4t64 libzmq5 node-jed 666s python-tinycss2-common python3-all python3-argon2 python3-asttokens 666s python3-bleach python3-bs4 python3-bytecode python3-comm python3-coverage 666s python3-dateutil python3-debugpy python3-decorator python3-defusedxml 666s python3-entrypoints python3-executing python3-fastjsonschema 666s python3-html5lib python3-ipykernel python3-ipython python3-ipython-genutils 666s python3-jedi python3-jupyter-client python3-jupyter-core 666s python3-jupyterlab-pygments python3-matplotlib-inline python3-mistune 666s python3-nbclient python3-nbconvert python3-nbformat python3-nest-asyncio 666s python3-notebook python3-packaging python3-pandocfilters python3-parso 666s python3-pexpect python3-platformdirs python3-prometheus-client 666s python3-prompt-toolkit python3-psutil python3-ptyprocess python3-pure-eval 666s python3-py python3-pydevd python3-send2trash python3-soupsieve 666s python3-stack-data python3-terminado python3-tinycss2 python3-tornado 666s python3-traitlets python3-typeshed python3-wcwidth python3-webencodings 666s python3-zmq 666s Suggested packages: 666s gdb-doc gdbserver libjs-jquery-lazyload libjs-json libjs-jquery-ui-docs 666s fonts-mathjax-extras fonts-stix libjs-mathjax-doc python-argon2-doc 666s 
python-bleach-doc python-bytecode-doc python-coverage-doc 666s python-fastjsonschema-doc python3-genshi python3-lxml python-ipython-doc 666s python3-pip python-nbconvert-doc texlive-fonts-recommended 666s texlive-plain-generic texlive-xetex python-notebook-doc python-pexpect-doc 666s subversion python3-pytest pydevd python-terminado-doc python-tinycss2-doc 666s python3-pycurl python-tornado-doc python3-twisted 666s Recommended packages: 666s javascript-common python3-lxml python3-matplotlib pandoc python3-ipywidgets 666s The following NEW packages will be installed: 666s autopkgtest-satdep fonts-font-awesome fonts-glyphicons-halflings 666s fonts-mathjax gdb libbabeltrace1 libc6-dbg libdebuginfod-common 666s libdebuginfod1t64 libdw1t64 libjs-backbone libjs-bootstrap 666s libjs-bootstrap-tour libjs-codemirror libjs-es6-promise libjs-jed 666s libjs-jquery libjs-jquery-typeahead libjs-jquery-ui libjs-marked 666s libjs-mathjax libjs-moment libjs-requirejs libjs-requirejs-text 666s libjs-text-encoding libjs-underscore libjs-xterm libnorm1t64 libpgm-5.3-0t64 666s libpython3.12t64 libsodium23 libsource-highlight-common 666s libsource-highlight4t64 libzmq5 node-jed python-tinycss2-common python3-all 666s python3-argon2 python3-asttokens python3-bleach python3-bs4 python3-bytecode 666s python3-comm python3-coverage python3-dateutil python3-debugpy 666s python3-decorator python3-defusedxml python3-entrypoints python3-executing 666s python3-fastjsonschema python3-html5lib python3-ipykernel python3-ipython 666s python3-ipython-genutils python3-jedi python3-jupyter-client 666s python3-jupyter-core python3-jupyterlab-pygments python3-matplotlib-inline 666s python3-mistune python3-nbclient python3-nbconvert python3-nbformat 666s python3-nest-asyncio python3-notebook python3-packaging 666s python3-pandocfilters python3-parso python3-pexpect python3-platformdirs 666s python3-prometheus-client python3-prompt-toolkit python3-psutil 666s python3-ptyprocess python3-pure-eval python3-py 
python3-pydevd 666s python3-send2trash python3-soupsieve python3-stack-data python3-terminado 666s python3-tinycss2 python3-tornado python3-traitlets python3-typeshed 666s python3-wcwidth python3-webencodings python3-zmq 666s 0 upgraded, 89 newly installed, 0 to remove and 0 not upgraded. 666s Need to get 32.5 MB/32.5 MB of archives. 666s After this operation, 152 MB of additional disk space will be used. 666s Get:1 /tmp/autopkgtest.FXI16z/3-autopkgtest-satdep.deb autopkgtest-satdep armhf 0 [712 B] 666s Get:2 http://ftpmaster.internal/ubuntu oracular/main armhf libdebuginfod-common all 0.191-1 [14.6 kB] 666s Get:3 http://ftpmaster.internal/ubuntu oracular/main armhf fonts-font-awesome all 5.0.10+really4.7.0~dfsg-4.1 [516 kB] 666s Get:4 http://ftpmaster.internal/ubuntu oracular/universe armhf fonts-glyphicons-halflings all 1.009~3.4.1+dfsg-3 [118 kB] 666s Get:5 http://ftpmaster.internal/ubuntu oracular/main armhf fonts-mathjax all 2.7.9+dfsg-1 [2208 kB] 666s Get:6 http://ftpmaster.internal/ubuntu oracular/main armhf libdw1t64 armhf 0.191-1 [238 kB] 666s Get:7 http://ftpmaster.internal/ubuntu oracular/main armhf libbabeltrace1 armhf 1.5.11-3build3 [154 kB] 666s Get:8 http://ftpmaster.internal/ubuntu oracular/main armhf libdebuginfod1t64 armhf 0.191-1 [15.8 kB] 666s Get:9 http://ftpmaster.internal/ubuntu oracular/main armhf libpython3.12t64 armhf 3.12.4-1 [2059 kB] 667s Get:10 http://ftpmaster.internal/ubuntu oracular/main armhf libsource-highlight-common all 3.1.9-4.3build1 [64.2 kB] 667s Get:11 http://ftpmaster.internal/ubuntu oracular/main armhf libsource-highlight4t64 armhf 3.1.9-4.3build1 [306 kB] 667s Get:12 http://ftpmaster.internal/ubuntu oracular/main armhf libc6-dbg armhf 2.39-0ubuntu9 [6017 kB] 667s Get:13 http://ftpmaster.internal/ubuntu oracular/main armhf gdb armhf 15.0.50.20240403-0ubuntu1 [3852 kB] 667s Get:14 http://ftpmaster.internal/ubuntu oracular/main armhf libjs-underscore all 1.13.4~dfsg+~1.11.4-3 [118 kB] 667s Get:15 
http://ftpmaster.internal/ubuntu oracular/universe armhf libjs-backbone all 1.4.1~dfsg+~1.4.15-3 [185 kB] 667s Get:16 http://ftpmaster.internal/ubuntu oracular/universe armhf libjs-bootstrap all 3.4.1+dfsg-3 [129 kB] 667s Get:17 http://ftpmaster.internal/ubuntu oracular/main armhf libjs-jquery all 3.6.1+dfsg+~3.5.14-1 [328 kB] 667s Get:18 http://ftpmaster.internal/ubuntu oracular/universe armhf libjs-bootstrap-tour all 0.12.0+dfsg-5 [21.4 kB] 667s Get:19 http://ftpmaster.internal/ubuntu oracular/universe armhf libjs-es6-promise all 4.2.8-12 [14.1 kB] 667s Get:20 http://ftpmaster.internal/ubuntu oracular/universe armhf node-jed all 1.1.1-4 [15.2 kB] 667s Get:21 http://ftpmaster.internal/ubuntu oracular/universe armhf libjs-jed all 1.1.1-4 [2584 B] 667s Get:22 http://ftpmaster.internal/ubuntu oracular/universe armhf libjs-jquery-typeahead all 2.11.0+dfsg1-3 [48.9 kB] 667s Get:23 http://ftpmaster.internal/ubuntu oracular/universe armhf libjs-jquery-ui all 1.13.2+dfsg-1 [252 kB] 667s Get:24 http://ftpmaster.internal/ubuntu oracular/universe armhf libjs-moment all 2.29.4+ds-1 [147 kB] 667s Get:25 http://ftpmaster.internal/ubuntu oracular/universe armhf libjs-text-encoding all 0.7.0-5 [140 kB] 667s Get:26 http://ftpmaster.internal/ubuntu oracular/universe armhf libjs-xterm all 5.3.0-2 [476 kB] 667s Get:27 http://ftpmaster.internal/ubuntu oracular/universe armhf libnorm1t64 armhf 1.5.9+dfsg-3.1build1 [206 kB] 667s Get:28 http://ftpmaster.internal/ubuntu oracular/universe armhf libpgm-5.3-0t64 armhf 5.3.128~dfsg-2.1build1 [171 kB] 667s Get:29 http://ftpmaster.internal/ubuntu oracular/main armhf libsodium23 armhf 1.0.18-1build3 [139 kB] 667s Get:30 http://ftpmaster.internal/ubuntu oracular/universe armhf libzmq5 armhf 4.3.5-1build2 [262 kB] 667s Get:31 http://ftpmaster.internal/ubuntu oracular/universe armhf python-tinycss2-common all 1.3.0-1 [34.1 kB] 667s Get:32 http://ftpmaster.internal/ubuntu oracular/main armhf python3-all armhf 3.12.3-0ubuntu1 [886 B] 667s Get:33 
http://ftpmaster.internal/ubuntu oracular/universe armhf python3-argon2 armhf 21.1.0-2build1 [19.9 kB] 667s Get:34 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-asttokens all 2.4.1-1 [20.9 kB] 667s Get:35 http://ftpmaster.internal/ubuntu oracular/main armhf python3-webencodings all 0.5.1-5 [11.5 kB] 667s Get:36 http://ftpmaster.internal/ubuntu oracular/main armhf python3-html5lib all 1.1-6 [88.8 kB] 667s Get:37 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-bleach all 6.1.0-2 [49.6 kB] 667s Get:38 http://ftpmaster.internal/ubuntu oracular/main armhf python3-soupsieve all 2.5-1 [33.0 kB] 667s Get:39 http://ftpmaster.internal/ubuntu oracular/main armhf python3-bs4 all 4.12.3-1 [109 kB] 667s Get:40 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-bytecode all 0.15.1-3 [44.7 kB] 667s Get:41 http://ftpmaster.internal/ubuntu oracular-proposed/universe armhf python3-traitlets all 5.14.3-1 [71.3 kB] 667s Get:42 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-comm all 0.2.1-1 [7016 B] 667s Get:43 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-coverage armhf 7.4.4+dfsg1-0ubuntu2 [146 kB] 667s Get:44 http://ftpmaster.internal/ubuntu oracular/main armhf python3-dateutil all 2.9.0-2 [80.3 kB] 667s Get:45 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-pydevd armhf 2.10.0+ds-10ubuntu1 [613 kB] 667s Get:46 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-debugpy all 1.8.0+ds-4ubuntu4 [67.6 kB] 667s Get:47 http://ftpmaster.internal/ubuntu oracular/main armhf python3-decorator all 5.1.1-5 [10.1 kB] 667s Get:48 http://ftpmaster.internal/ubuntu oracular/main armhf python3-defusedxml all 0.7.1-2 [42.0 kB] 667s Get:49 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-entrypoints all 0.4-2 [7146 B] 667s Get:50 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-executing all 2.0.1-0.1 [23.3 kB] 667s Get:51 http://ftpmaster.internal/ubuntu 
oracular/universe armhf python3-fastjsonschema all 2.19.1-1 [19.7 kB] 667s Get:52 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-parso all 0.8.3-1 [67.2 kB] 667s Get:53 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-typeshed all 0.0~git20231111.6764465-3 [1274 kB] 667s Get:54 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-jedi all 0.19.1+ds1-1 [693 kB] 667s Get:55 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-matplotlib-inline all 0.1.6-2 [8784 B] 667s Get:56 http://ftpmaster.internal/ubuntu oracular/main armhf python3-ptyprocess all 0.7.0-5 [15.1 kB] 667s Get:57 http://ftpmaster.internal/ubuntu oracular/main armhf python3-pexpect all 4.9-2 [48.1 kB] 667s Get:58 http://ftpmaster.internal/ubuntu oracular/main armhf python3-wcwidth all 0.2.5+dfsg1-1.1ubuntu1 [22.5 kB] 667s Get:59 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-prompt-toolkit all 3.0.46-1 [256 kB] 667s Get:60 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-pure-eval all 0.2.2-2 [11.1 kB] 667s Get:61 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-stack-data all 0.6.3-1 [22.0 kB] 667s Get:62 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-ipython all 8.20.0-1ubuntu1 [561 kB] 667s Get:63 http://ftpmaster.internal/ubuntu oracular/main armhf python3-platformdirs all 4.2.1-1 [16.3 kB] 667s Get:64 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-jupyter-core all 5.3.2-2 [25.5 kB] 667s Get:65 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-nest-asyncio all 1.5.4-1 [6256 B] 667s Get:66 http://ftpmaster.internal/ubuntu oracular/main armhf python3-tornado armhf 6.4.1-1 [298 kB] 667s Get:67 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-py all 1.11.0-2 [72.7 kB] 667s Get:68 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-zmq armhf 24.0.1-5build1 [275 kB] 667s Get:69 http://ftpmaster.internal/ubuntu 
oracular/universe armhf python3-jupyter-client all 7.4.9-2ubuntu1 [90.5 kB] 667s Get:70 http://ftpmaster.internal/ubuntu oracular/main armhf python3-packaging all 24.0-1 [41.1 kB] 667s Get:71 http://ftpmaster.internal/ubuntu oracular/main armhf python3-psutil armhf 5.9.8-2build2 [194 kB] 667s Get:72 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-ipykernel all 6.29.3-1ubuntu1 [82.6 kB] 667s Get:73 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-ipython-genutils all 0.2.0-6 [22.0 kB] 667s Get:74 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-jupyterlab-pygments all 0.2.2-3 [6054 B] 667s Get:75 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-mistune all 3.0.2-1 [32.8 kB] 667s Get:76 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-nbformat all 5.9.1-1 [41.2 kB] 667s Get:77 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-nbclient all 0.8.0-1 [55.6 kB] 667s Get:78 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-pandocfilters all 1.5.1-1 [23.6 kB] 667s Get:79 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-tinycss2 all 1.3.0-1 [19.6 kB] 667s Get:80 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-nbconvert all 7.16.4-1 [156 kB] 668s Get:81 http://ftpmaster.internal/ubuntu oracular/universe armhf libjs-codemirror all 5.65.0+~cs5.83.9-3 [755 kB] 668s Get:82 http://ftpmaster.internal/ubuntu oracular/universe armhf libjs-marked all 4.2.3+ds+~4.0.7-3 [36.2 kB] 668s Get:83 http://ftpmaster.internal/ubuntu oracular/main armhf libjs-mathjax all 2.7.9+dfsg-1 [5665 kB] 668s Get:84 http://ftpmaster.internal/ubuntu oracular/universe armhf libjs-requirejs all 2.3.6+ds+~2.1.37-1 [201 kB] 668s Get:85 http://ftpmaster.internal/ubuntu oracular/universe armhf libjs-requirejs-text all 2.0.12-1.1 [9056 B] 668s Get:86 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-terminado all 0.18.1-1 [13.2 kB] 668s Get:87 
http://ftpmaster.internal/ubuntu oracular/main armhf python3-prometheus-client all 0.19.0+ds1-1 [41.7 kB] 668s Get:88 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-send2trash all 1.8.2-1 [15.5 kB] 668s Get:89 http://ftpmaster.internal/ubuntu oracular/universe armhf python3-notebook all 6.4.12-2.2ubuntu1 [1566 kB] 668s Preconfiguring packages ... 668s Fetched 32.5 MB in 2s (16.4 MB/s) 668s Selecting previously unselected package libdebuginfod-common. 668s (Reading database ... 58402 files and directories currently installed.) 668s Preparing to unpack .../00-libdebuginfod-common_0.191-1_all.deb ... 668s Unpacking libdebuginfod-common (0.191-1) ... 668s Selecting previously unselected package fonts-font-awesome. 668s Preparing to unpack .../01-fonts-font-awesome_5.0.10+really4.7.0~dfsg-4.1_all.deb ... 668s Unpacking fonts-font-awesome (5.0.10+really4.7.0~dfsg-4.1) ... 668s Selecting previously unselected package fonts-glyphicons-halflings. 668s Preparing to unpack .../02-fonts-glyphicons-halflings_1.009~3.4.1+dfsg-3_all.deb ... 668s Unpacking fonts-glyphicons-halflings (1.009~3.4.1+dfsg-3) ... 668s Selecting previously unselected package fonts-mathjax. 669s Preparing to unpack .../03-fonts-mathjax_2.7.9+dfsg-1_all.deb ... 669s Unpacking fonts-mathjax (2.7.9+dfsg-1) ... 669s Selecting previously unselected package libdw1t64:armhf. 669s Preparing to unpack .../04-libdw1t64_0.191-1_armhf.deb ...
669s Unpacking libdw1t64:armhf (0.191-1) ... 669s Selecting previously unselected package libbabeltrace1:armhf. 669s Preparing to unpack .../05-libbabeltrace1_1.5.11-3build3_armhf.deb ... 669s Unpacking libbabeltrace1:armhf (1.5.11-3build3) ... 669s Selecting previously unselected package libdebuginfod1t64:armhf. 669s Preparing to unpack .../06-libdebuginfod1t64_0.191-1_armhf.deb ... 669s Unpacking libdebuginfod1t64:armhf (0.191-1) ... 669s Selecting previously unselected package libpython3.12t64:armhf. 669s Preparing to unpack .../07-libpython3.12t64_3.12.4-1_armhf.deb ... 669s Unpacking libpython3.12t64:armhf (3.12.4-1) ... 669s Selecting previously unselected package libsource-highlight-common. 669s Preparing to unpack .../08-libsource-highlight-common_3.1.9-4.3build1_all.deb ... 669s Unpacking libsource-highlight-common (3.1.9-4.3build1) ... 669s Selecting previously unselected package libsource-highlight4t64:armhf. 669s Preparing to unpack .../09-libsource-highlight4t64_3.1.9-4.3build1_armhf.deb ... 669s Unpacking libsource-highlight4t64:armhf (3.1.9-4.3build1) ... 669s Selecting previously unselected package libc6-dbg:armhf. 669s Preparing to unpack .../10-libc6-dbg_2.39-0ubuntu9_armhf.deb ... 669s Unpacking libc6-dbg:armhf (2.39-0ubuntu9) ... 669s Selecting previously unselected package gdb. 669s Preparing to unpack .../11-gdb_15.0.50.20240403-0ubuntu1_armhf.deb ... 669s Unpacking gdb (15.0.50.20240403-0ubuntu1) ... 669s Selecting previously unselected package libjs-underscore. 669s Preparing to unpack .../12-libjs-underscore_1.13.4~dfsg+~1.11.4-3_all.deb ... 669s Unpacking libjs-underscore (1.13.4~dfsg+~1.11.4-3) ... 669s Selecting previously unselected package libjs-backbone. 669s Preparing to unpack .../13-libjs-backbone_1.4.1~dfsg+~1.4.15-3_all.deb ... 669s Unpacking libjs-backbone (1.4.1~dfsg+~1.4.15-3) ... 669s Selecting previously unselected package libjs-bootstrap. 669s Preparing to unpack .../14-libjs-bootstrap_3.4.1+dfsg-3_all.deb ... 
669s Unpacking libjs-bootstrap (3.4.1+dfsg-3) ... 669s Selecting previously unselected package libjs-jquery. 669s Preparing to unpack .../15-libjs-jquery_3.6.1+dfsg+~3.5.14-1_all.deb ... 669s Unpacking libjs-jquery (3.6.1+dfsg+~3.5.14-1) ... 669s Selecting previously unselected package libjs-bootstrap-tour. 669s Preparing to unpack .../16-libjs-bootstrap-tour_0.12.0+dfsg-5_all.deb ... 669s Unpacking libjs-bootstrap-tour (0.12.0+dfsg-5) ... 669s Selecting previously unselected package libjs-es6-promise. 670s Preparing to unpack .../17-libjs-es6-promise_4.2.8-12_all.deb ... 670s Unpacking libjs-es6-promise (4.2.8-12) ... 670s Selecting previously unselected package node-jed. 670s Preparing to unpack .../18-node-jed_1.1.1-4_all.deb ... 670s Unpacking node-jed (1.1.1-4) ... 670s Selecting previously unselected package libjs-jed. 670s Preparing to unpack .../19-libjs-jed_1.1.1-4_all.deb ... 670s Unpacking libjs-jed (1.1.1-4) ... 670s Selecting previously unselected package libjs-jquery-typeahead. 670s Preparing to unpack .../20-libjs-jquery-typeahead_2.11.0+dfsg1-3_all.deb ... 670s Unpacking libjs-jquery-typeahead (2.11.0+dfsg1-3) ... 670s Selecting previously unselected package libjs-jquery-ui. 670s Preparing to unpack .../21-libjs-jquery-ui_1.13.2+dfsg-1_all.deb ... 670s Unpacking libjs-jquery-ui (1.13.2+dfsg-1) ... 670s Selecting previously unselected package libjs-moment. 670s Preparing to unpack .../22-libjs-moment_2.29.4+ds-1_all.deb ... 670s Unpacking libjs-moment (2.29.4+ds-1) ... 670s Selecting previously unselected package libjs-text-encoding. 670s Preparing to unpack .../23-libjs-text-encoding_0.7.0-5_all.deb ... 670s Unpacking libjs-text-encoding (0.7.0-5) ... 670s Selecting previously unselected package libjs-xterm. 670s Preparing to unpack .../24-libjs-xterm_5.3.0-2_all.deb ... 670s Unpacking libjs-xterm (5.3.0-2) ... 670s Selecting previously unselected package libnorm1t64:armhf. 
670s Preparing to unpack .../25-libnorm1t64_1.5.9+dfsg-3.1build1_armhf.deb ... 670s Unpacking libnorm1t64:armhf (1.5.9+dfsg-3.1build1) ... 670s Selecting previously unselected package libpgm-5.3-0t64:armhf. 670s Preparing to unpack .../26-libpgm-5.3-0t64_5.3.128~dfsg-2.1build1_armhf.deb ... 670s Unpacking libpgm-5.3-0t64:armhf (5.3.128~dfsg-2.1build1) ... 670s Selecting previously unselected package libsodium23:armhf. 670s Preparing to unpack .../27-libsodium23_1.0.18-1build3_armhf.deb ... 670s Unpacking libsodium23:armhf (1.0.18-1build3) ... 670s Selecting previously unselected package libzmq5:armhf. 670s Preparing to unpack .../28-libzmq5_4.3.5-1build2_armhf.deb ... 670s Unpacking libzmq5:armhf (4.3.5-1build2) ... 670s Selecting previously unselected package python-tinycss2-common. 670s Preparing to unpack .../29-python-tinycss2-common_1.3.0-1_all.deb ... 670s Unpacking python-tinycss2-common (1.3.0-1) ... 670s Selecting previously unselected package python3-all. 670s Preparing to unpack .../30-python3-all_3.12.3-0ubuntu1_armhf.deb ... 670s Unpacking python3-all (3.12.3-0ubuntu1) ... 670s Selecting previously unselected package python3-argon2. 670s Preparing to unpack .../31-python3-argon2_21.1.0-2build1_armhf.deb ... 670s Unpacking python3-argon2 (21.1.0-2build1) ... 670s Selecting previously unselected package python3-asttokens. 670s Preparing to unpack .../32-python3-asttokens_2.4.1-1_all.deb ... 670s Unpacking python3-asttokens (2.4.1-1) ... 670s Selecting previously unselected package python3-webencodings. 670s Preparing to unpack .../33-python3-webencodings_0.5.1-5_all.deb ... 670s Unpacking python3-webencodings (0.5.1-5) ... 670s Selecting previously unselected package python3-html5lib. 670s Preparing to unpack .../34-python3-html5lib_1.1-6_all.deb ... 670s Unpacking python3-html5lib (1.1-6) ... 670s Selecting previously unselected package python3-bleach. 670s Preparing to unpack .../35-python3-bleach_6.1.0-2_all.deb ... 
670s Unpacking python3-bleach (6.1.0-2) ... 670s Selecting previously unselected package python3-soupsieve. 670s Preparing to unpack .../36-python3-soupsieve_2.5-1_all.deb ... 670s Unpacking python3-soupsieve (2.5-1) ... 670s Selecting previously unselected package python3-bs4. 670s Preparing to unpack .../37-python3-bs4_4.12.3-1_all.deb ... 670s Unpacking python3-bs4 (4.12.3-1) ... 670s Selecting previously unselected package python3-bytecode. 670s Preparing to unpack .../38-python3-bytecode_0.15.1-3_all.deb ... 670s Unpacking python3-bytecode (0.15.1-3) ... 670s Selecting previously unselected package python3-traitlets. 670s Preparing to unpack .../39-python3-traitlets_5.14.3-1_all.deb ... 670s Unpacking python3-traitlets (5.14.3-1) ... 670s Selecting previously unselected package python3-comm. 670s Preparing to unpack .../40-python3-comm_0.2.1-1_all.deb ... 670s Unpacking python3-comm (0.2.1-1) ... 670s Selecting previously unselected package python3-coverage. 670s Preparing to unpack .../41-python3-coverage_7.4.4+dfsg1-0ubuntu2_armhf.deb ... 670s Unpacking python3-coverage (7.4.4+dfsg1-0ubuntu2) ... 671s Selecting previously unselected package python3-dateutil. 671s Preparing to unpack .../42-python3-dateutil_2.9.0-2_all.deb ... 671s Unpacking python3-dateutil (2.9.0-2) ... 671s Selecting previously unselected package python3-pydevd. 671s Preparing to unpack .../43-python3-pydevd_2.10.0+ds-10ubuntu1_armhf.deb ... 671s Unpacking python3-pydevd (2.10.0+ds-10ubuntu1) ... 671s Selecting previously unselected package python3-debugpy. 671s Preparing to unpack .../44-python3-debugpy_1.8.0+ds-4ubuntu4_all.deb ... 671s Unpacking python3-debugpy (1.8.0+ds-4ubuntu4) ... 671s Selecting previously unselected package python3-decorator. 671s Preparing to unpack .../45-python3-decorator_5.1.1-5_all.deb ... 671s Unpacking python3-decorator (5.1.1-5) ... 671s Selecting previously unselected package python3-defusedxml. 
671s Preparing to unpack .../46-python3-defusedxml_0.7.1-2_all.deb ... 671s Unpacking python3-defusedxml (0.7.1-2) ... 671s Selecting previously unselected package python3-entrypoints. 671s Preparing to unpack .../47-python3-entrypoints_0.4-2_all.deb ... 671s Unpacking python3-entrypoints (0.4-2) ... 671s Selecting previously unselected package python3-executing. 671s Preparing to unpack .../48-python3-executing_2.0.1-0.1_all.deb ... 671s Unpacking python3-executing (2.0.1-0.1) ... 671s Selecting previously unselected package python3-fastjsonschema. 671s Preparing to unpack .../49-python3-fastjsonschema_2.19.1-1_all.deb ... 671s Unpacking python3-fastjsonschema (2.19.1-1) ... 671s Selecting previously unselected package python3-parso. 671s Preparing to unpack .../50-python3-parso_0.8.3-1_all.deb ... 671s Unpacking python3-parso (0.8.3-1) ... 671s Selecting previously unselected package python3-typeshed. 671s Preparing to unpack .../51-python3-typeshed_0.0~git20231111.6764465-3_all.deb ... 671s Unpacking python3-typeshed (0.0~git20231111.6764465-3) ... 672s Selecting previously unselected package python3-jedi. 672s Preparing to unpack .../52-python3-jedi_0.19.1+ds1-1_all.deb ... 672s Unpacking python3-jedi (0.19.1+ds1-1) ... 672s Selecting previously unselected package python3-matplotlib-inline. 672s Preparing to unpack .../53-python3-matplotlib-inline_0.1.6-2_all.deb ... 672s Unpacking python3-matplotlib-inline (0.1.6-2) ... 672s Selecting previously unselected package python3-ptyprocess. 672s Preparing to unpack .../54-python3-ptyprocess_0.7.0-5_all.deb ... 672s Unpacking python3-ptyprocess (0.7.0-5) ... 672s Selecting previously unselected package python3-pexpect. 672s Preparing to unpack .../55-python3-pexpect_4.9-2_all.deb ... 672s Unpacking python3-pexpect (4.9-2) ... 672s Selecting previously unselected package python3-wcwidth. 672s Preparing to unpack .../56-python3-wcwidth_0.2.5+dfsg1-1.1ubuntu1_all.deb ... 
672s Unpacking python3-wcwidth (0.2.5+dfsg1-1.1ubuntu1) ...
672s Selecting previously unselected package python3-prompt-toolkit.
672s Preparing to unpack .../57-python3-prompt-toolkit_3.0.46-1_all.deb ...
672s Unpacking python3-prompt-toolkit (3.0.46-1) ...
672s Selecting previously unselected package python3-pure-eval.
672s Preparing to unpack .../58-python3-pure-eval_0.2.2-2_all.deb ...
672s Unpacking python3-pure-eval (0.2.2-2) ...
672s Selecting previously unselected package python3-stack-data.
672s Preparing to unpack .../59-python3-stack-data_0.6.3-1_all.deb ...
672s Unpacking python3-stack-data (0.6.3-1) ...
672s Selecting previously unselected package python3-ipython.
672s Preparing to unpack .../60-python3-ipython_8.20.0-1ubuntu1_all.deb ...
672s Unpacking python3-ipython (8.20.0-1ubuntu1) ...
672s Selecting previously unselected package python3-platformdirs.
672s Preparing to unpack .../61-python3-platformdirs_4.2.1-1_all.deb ...
672s Unpacking python3-platformdirs (4.2.1-1) ...
672s Selecting previously unselected package python3-jupyter-core.
672s Preparing to unpack .../62-python3-jupyter-core_5.3.2-2_all.deb ...
672s Unpacking python3-jupyter-core (5.3.2-2) ...
672s Selecting previously unselected package python3-nest-asyncio.
672s Preparing to unpack .../63-python3-nest-asyncio_1.5.4-1_all.deb ...
672s Unpacking python3-nest-asyncio (1.5.4-1) ...
672s Selecting previously unselected package python3-tornado.
672s Preparing to unpack .../64-python3-tornado_6.4.1-1_armhf.deb ...
672s Unpacking python3-tornado (6.4.1-1) ...
672s Selecting previously unselected package python3-py.
672s Preparing to unpack .../65-python3-py_1.11.0-2_all.deb ...
672s Unpacking python3-py (1.11.0-2) ...
673s Selecting previously unselected package python3-zmq.
673s Preparing to unpack .../66-python3-zmq_24.0.1-5build1_armhf.deb ...
673s Unpacking python3-zmq (24.0.1-5build1) ...
673s Selecting previously unselected package python3-jupyter-client.
673s Preparing to unpack .../67-python3-jupyter-client_7.4.9-2ubuntu1_all.deb ...
673s Unpacking python3-jupyter-client (7.4.9-2ubuntu1) ...
673s Selecting previously unselected package python3-packaging.
673s Preparing to unpack .../68-python3-packaging_24.0-1_all.deb ...
673s Unpacking python3-packaging (24.0-1) ...
673s Selecting previously unselected package python3-psutil.
673s Preparing to unpack .../69-python3-psutil_5.9.8-2build2_armhf.deb ...
673s Unpacking python3-psutil (5.9.8-2build2) ...
673s Selecting previously unselected package python3-ipykernel.
673s Preparing to unpack .../70-python3-ipykernel_6.29.3-1ubuntu1_all.deb ...
673s Unpacking python3-ipykernel (6.29.3-1ubuntu1) ...
673s Selecting previously unselected package python3-ipython-genutils.
673s Preparing to unpack .../71-python3-ipython-genutils_0.2.0-6_all.deb ...
673s Unpacking python3-ipython-genutils (0.2.0-6) ...
673s Selecting previously unselected package python3-jupyterlab-pygments.
673s Preparing to unpack .../72-python3-jupyterlab-pygments_0.2.2-3_all.deb ...
673s Unpacking python3-jupyterlab-pygments (0.2.2-3) ...
673s Selecting previously unselected package python3-mistune.
673s Preparing to unpack .../73-python3-mistune_3.0.2-1_all.deb ...
673s Unpacking python3-mistune (3.0.2-1) ...
673s Selecting previously unselected package python3-nbformat.
673s Preparing to unpack .../74-python3-nbformat_5.9.1-1_all.deb ...
673s Unpacking python3-nbformat (5.9.1-1) ...
673s Selecting previously unselected package python3-nbclient.
673s Preparing to unpack .../75-python3-nbclient_0.8.0-1_all.deb ...
673s Unpacking python3-nbclient (0.8.0-1) ...
673s Selecting previously unselected package python3-pandocfilters.
673s Preparing to unpack .../76-python3-pandocfilters_1.5.1-1_all.deb ...
673s Unpacking python3-pandocfilters (1.5.1-1) ...
673s Selecting previously unselected package python3-tinycss2.
673s Preparing to unpack .../77-python3-tinycss2_1.3.0-1_all.deb ...
673s Unpacking python3-tinycss2 (1.3.0-1) ...
673s Selecting previously unselected package python3-nbconvert.
673s Preparing to unpack .../78-python3-nbconvert_7.16.4-1_all.deb ...
673s Unpacking python3-nbconvert (7.16.4-1) ...
673s Selecting previously unselected package libjs-codemirror.
673s Preparing to unpack .../79-libjs-codemirror_5.65.0+~cs5.83.9-3_all.deb ...
673s Unpacking libjs-codemirror (5.65.0+~cs5.83.9-3) ...
673s Selecting previously unselected package libjs-marked.
673s Preparing to unpack .../80-libjs-marked_4.2.3+ds+~4.0.7-3_all.deb ...
673s Unpacking libjs-marked (4.2.3+ds+~4.0.7-3) ...
673s Selecting previously unselected package libjs-mathjax.
673s Preparing to unpack .../81-libjs-mathjax_2.7.9+dfsg-1_all.deb ...
673s Unpacking libjs-mathjax (2.7.9+dfsg-1) ...
674s Selecting previously unselected package libjs-requirejs.
674s Preparing to unpack .../82-libjs-requirejs_2.3.6+ds+~2.1.37-1_all.deb ...
674s Unpacking libjs-requirejs (2.3.6+ds+~2.1.37-1) ...
674s Selecting previously unselected package libjs-requirejs-text.
674s Preparing to unpack .../83-libjs-requirejs-text_2.0.12-1.1_all.deb ...
674s Unpacking libjs-requirejs-text (2.0.12-1.1) ...
674s Selecting previously unselected package python3-terminado.
674s Preparing to unpack .../84-python3-terminado_0.18.1-1_all.deb ...
674s Unpacking python3-terminado (0.18.1-1) ...
674s Selecting previously unselected package python3-prometheus-client.
674s Preparing to unpack .../85-python3-prometheus-client_0.19.0+ds1-1_all.deb ...
674s Unpacking python3-prometheus-client (0.19.0+ds1-1) ...
674s Selecting previously unselected package python3-send2trash.
674s Preparing to unpack .../86-python3-send2trash_1.8.2-1_all.deb ...
674s Unpacking python3-send2trash (1.8.2-1) ...
674s Selecting previously unselected package python3-notebook.
674s Preparing to unpack .../87-python3-notebook_6.4.12-2.2ubuntu1_all.deb ...
674s Unpacking python3-notebook (6.4.12-2.2ubuntu1) ...
675s Selecting previously unselected package autopkgtest-satdep.
675s Preparing to unpack .../88-3-autopkgtest-satdep.deb ...
675s Unpacking autopkgtest-satdep (0) ...
675s Setting up python3-entrypoints (0.4-2) ...
675s Setting up libjs-jquery-typeahead (2.11.0+dfsg1-3) ...
675s Setting up python3-tornado (6.4.1-1) ...
675s Setting up libnorm1t64:armhf (1.5.9+dfsg-3.1build1) ...
675s Setting up python3-pure-eval (0.2.2-2) ...
675s Setting up python3-send2trash (1.8.2-1) ...
676s Setting up fonts-mathjax (2.7.9+dfsg-1) ...
676s Setting up libsodium23:armhf (1.0.18-1build3) ...
676s Setting up libjs-mathjax (2.7.9+dfsg-1) ...
676s Setting up python3-py (1.11.0-2) ...
676s Setting up libdebuginfod-common (0.191-1) ...
676s Setting up libjs-requirejs-text (2.0.12-1.1) ...
676s Setting up python3-parso (0.8.3-1) ...
676s Setting up python3-defusedxml (0.7.1-2) ...
676s Setting up python3-ipython-genutils (0.2.0-6) ...
676s Setting up python3-asttokens (2.4.1-1) ...
677s Setting up fonts-glyphicons-halflings (1.009~3.4.1+dfsg-3) ...
677s Setting up python3-all (3.12.3-0ubuntu1) ...
677s Setting up python3-coverage (7.4.4+dfsg1-0ubuntu2) ...
677s Setting up libjs-moment (2.29.4+ds-1) ...
677s Setting up python3-pandocfilters (1.5.1-1) ...
677s Setting up libjs-requirejs (2.3.6+ds+~2.1.37-1) ...
677s Setting up libjs-es6-promise (4.2.8-12) ...
677s Setting up libjs-text-encoding (0.7.0-5) ...
677s Setting up python3-webencodings (0.5.1-5) ...
677s Setting up python3-platformdirs (4.2.1-1) ...
677s Setting up python3-psutil (5.9.8-2build2) ...
678s Setting up libsource-highlight-common (3.1.9-4.3build1) ...
678s Setting up libc6-dbg:armhf (2.39-0ubuntu9) ...
678s Setting up libdw1t64:armhf (0.191-1) ...
678s Setting up python3-jupyterlab-pygments (0.2.2-3) ...
678s Setting up libpython3.12t64:armhf (3.12.4-1) ...
678s Setting up libpgm-5.3-0t64:armhf (5.3.128~dfsg-2.1build1) ...
678s Setting up python3-decorator (5.1.1-5) ...
678s Setting up python3-packaging (24.0-1) ...
678s Setting up python3-wcwidth (0.2.5+dfsg1-1.1ubuntu1) ...
678s Setting up node-jed (1.1.1-4) ...
678s Setting up python3-typeshed (0.0~git20231111.6764465-3) ...
678s Setting up python3-executing (2.0.1-0.1) ...
678s Setting up libjs-xterm (5.3.0-2) ...
678s Setting up python3-nest-asyncio (1.5.4-1) ...
679s Setting up python3-bytecode (0.15.1-3) ...
679s Setting up libjs-codemirror (5.65.0+~cs5.83.9-3) ...
679s Setting up libjs-jed (1.1.1-4) ...
679s Setting up python3-html5lib (1.1-6) ...
679s Setting up libbabeltrace1:armhf (1.5.11-3build3) ...
679s Setting up python3-fastjsonschema (2.19.1-1) ...
679s Setting up python3-traitlets (5.14.3-1) ...
679s Setting up python-tinycss2-common (1.3.0-1) ...
679s Setting up python3-argon2 (21.1.0-2build1) ...
679s Setting up python3-dateutil (2.9.0-2) ...
680s Setting up libjs-jquery (3.6.1+dfsg+~3.5.14-1) ...
680s Setting up python3-mistune (3.0.2-1) ...
680s Setting up python3-stack-data (0.6.3-1) ...
680s Setting up python3-soupsieve (2.5-1) ...
680s Setting up fonts-font-awesome (5.0.10+really4.7.0~dfsg-4.1) ...
680s Setting up python3-jupyter-core (5.3.2-2) ...
680s Setting up libjs-bootstrap (3.4.1+dfsg-3) ...
680s Setting up libjs-underscore (1.13.4~dfsg+~1.11.4-3) ...
680s Setting up python3-ptyprocess (0.7.0-5) ...
680s Setting up libjs-marked (4.2.3+ds+~4.0.7-3) ...
680s Setting up python3-prompt-toolkit (3.0.46-1) ...
681s Setting up libdebuginfod1t64:armhf (0.191-1) ...
681s Setting up python3-tinycss2 (1.3.0-1) ...
681s Setting up libzmq5:armhf (4.3.5-1build2) ...
681s Setting up python3-jedi (0.19.1+ds1-1) ...
681s Setting up libjs-bootstrap-tour (0.12.0+dfsg-5) ...
681s Setting up libjs-backbone (1.4.1~dfsg+~1.4.15-3) ...
681s Setting up libsource-highlight4t64:armhf (3.1.9-4.3build1) ...
681s Setting up python3-nbformat (5.9.1-1) ...
681s Setting up python3-bs4 (4.12.3-1) ...
682s Setting up python3-bleach (6.1.0-2) ...
682s Setting up python3-matplotlib-inline (0.1.6-2) ...
682s Setting up python3-comm (0.2.1-1) ...
682s Setting up python3-prometheus-client (0.19.0+ds1-1) ...
682s Setting up gdb (15.0.50.20240403-0ubuntu1) ...
682s Setting up libjs-jquery-ui (1.13.2+dfsg-1) ...
682s Setting up python3-pexpect (4.9-2) ...
682s Setting up python3-zmq (24.0.1-5build1) ...
683s Setting up python3-terminado (0.18.1-1) ...
683s Setting up python3-jupyter-client (7.4.9-2ubuntu1) ...
683s Setting up python3-pydevd (2.10.0+ds-10ubuntu1) ...
684s Setting up python3-debugpy (1.8.0+ds-4ubuntu4) ...
684s Setting up python3-nbclient (0.8.0-1) ...
684s Setting up python3-ipython (8.20.0-1ubuntu1) ...
685s Setting up python3-ipykernel (6.29.3-1ubuntu1) ...
685s Setting up python3-nbconvert (7.16.4-1) ...
685s Setting up python3-notebook (6.4.12-2.2ubuntu1) ...
685s Setting up autopkgtest-satdep (0) ...
685s Processing triggers for man-db (2.12.1-2) ...
686s Processing triggers for libc-bin (2.39-0ubuntu9) ...
705s (Reading database ... 75054 files and directories currently installed.)
705s Removing autopkgtest-satdep (0) ...
716s autopkgtest [10:39:15]: test autodep8-python3: set -e ; for py in $(py3versions -r 2>/dev/null) ; do cd "$AUTOPKGTEST_TMP" ; echo "Testing with $py:" ; $py -c "import notebook; print(notebook)" ; done
716s autopkgtest [10:39:15]: test autodep8-python3: [-----------------------
718s Testing with python3.12:
718s
719s autopkgtest [10:39:18]: test autodep8-python3: -----------------------]
722s autopkgtest [10:39:21]: test autodep8-python3: - - - - - - - - - - results - - - - - - - - - -
722s autodep8-python3 PASS (superficial)
726s autopkgtest [10:39:25]: @@@@@@@@@@@@@@@@@@@@ summary
726s pytest FAIL non-zero exit status 1
726s command1 PASS (superficial)
726s autodep8-python3 PASS (superficial)
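The autodep8-python3 test in the log above is a superficial smoke test: for each supported Python 3 interpreter it merely imports the package and prints the module object. A minimal sketch for reproducing that check locally is below; it assumes `py3versions` (from Debian/Ubuntu's python3-minimal package) may be absent and falls back to plain `python3`, and it defaults to the always-available stdlib module `json` for illustration, whereas the real test imports `notebook`:

```shell
#!/bin/sh
# Smoke-test that a module imports cleanly under every supported
# Python 3 interpreter, mirroring the autodep8-python3 loop.
set -e
MODULE="${1:-json}"   # the actual autopkgtest uses "notebook"
for py in $(py3versions -r 2>/dev/null || echo python3); do
    echo "Testing with $py:"
    "$py" -c "import $MODULE; print($MODULE)"
done
```

Note that the real test also depends on `AUTOPKGTEST_TMP` being set by the runner; this sketch omits the `cd` since it only affects where import side effects land.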